00:00:00.001 Started by upstream project "autotest-nightly" build number 4274 00:00:00.001 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3637 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.001 Started by timer 00:00:00.151 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.152 The recommended git tool is: git 00:00:00.152 using credential 00000000-0000-0000-0000-000000000002 00:00:00.154 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-uring-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.200 Fetching changes from the remote Git repository 00:00:00.202 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.242 Using shallow fetch with depth 1 00:00:00.242 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.242 > git --version # timeout=10 00:00:00.276 > git --version # 'git version 2.39.2' 00:00:00.276 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.302 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.302 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:07.535 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.547 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.560 Checking out Revision b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf (FETCH_HEAD) 00:00:07.560 > git config core.sparsecheckout # timeout=10 00:00:07.572 > git read-tree -mu HEAD # timeout=10 00:00:07.589 > git checkout -f b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=5 00:00:07.608 Commit message: "jenkins/jjb-config: Ignore OS version mismatch under freebsd" 00:00:07.608 > git rev-list --no-walk b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf # timeout=10 00:00:07.727 [Pipeline] Start of Pipeline 00:00:07.740 [Pipeline] library 00:00:07.741 Loading library shm_lib@master 00:00:07.741 Library shm_lib@master is cached. Copying from home. 00:00:07.753 [Pipeline] node 00:00:07.765 Running on VM-host-SM9 in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:00:07.767 [Pipeline] { 00:00:07.776 [Pipeline] catchError 00:00:07.777 [Pipeline] { 00:00:07.788 [Pipeline] wrap 00:00:07.795 [Pipeline] { 00:00:07.801 [Pipeline] stage 00:00:07.802 [Pipeline] { (Prologue) 00:00:07.816 [Pipeline] echo 00:00:07.818 Node: VM-host-SM9 00:00:07.823 [Pipeline] cleanWs 00:00:07.832 [WS-CLEANUP] Deleting project workspace... 00:00:07.832 [WS-CLEANUP] Deferred wipeout is used... 
00:00:07.837 [WS-CLEANUP] done 00:00:08.017 [Pipeline] setCustomBuildProperty 00:00:08.100 [Pipeline] httpRequest 00:00:08.745 [Pipeline] echo 00:00:08.747 Sorcerer 10.211.164.20 is alive 00:00:08.756 [Pipeline] retry 00:00:08.757 [Pipeline] { 00:00:08.767 [Pipeline] httpRequest 00:00:08.770 HttpMethod: GET 00:00:08.771 URL: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.771 Sending request to url: http://10.211.164.20/packages/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:08.773 Response Code: HTTP/1.1 200 OK 00:00:08.773 Success: Status code 200 is in the accepted range: 200,404 00:00:08.774 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:09.807 [Pipeline] } 00:00:09.823 [Pipeline] // retry 00:00:09.831 [Pipeline] sh 00:00:10.113 + tar --no-same-owner -xf jbp_b9dd3f7ec12b0ee8a44940dc99ce739345caa4cf.tar.gz 00:00:10.128 [Pipeline] httpRequest 00:00:11.354 [Pipeline] echo 00:00:11.355 Sorcerer 10.211.164.20 is alive 00:00:11.365 [Pipeline] retry 00:00:11.367 [Pipeline] { 00:00:11.381 [Pipeline] httpRequest 00:00:11.386 HttpMethod: GET 00:00:11.387 URL: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:11.387 Sending request to url: http://10.211.164.20/packages/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:00:11.405 Response Code: HTTP/1.1 200 OK 00:00:11.406 Success: Status code 200 is in the accepted range: 200,404 00:00:11.407 Saving response body to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:27.057 [Pipeline] } 00:01:27.076 [Pipeline] // retry 00:01:27.084 [Pipeline] sh 00:01:27.364 + tar --no-same-owner -xf spdk_83e8405e4c25408c010ba2b9e02ce45e2347370c.tar.gz 00:01:30.656 [Pipeline] sh 00:01:30.940 + git -C spdk log --oneline -n5 00:01:30.941 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:01:30.941 0eab4c6fb nvmf/fc: Validate the ctrlr pointer inside nvmf_fc_req_bdev_abort() 00:01:30.941 4bcab9fb9 correct kick for CQ full case 00:01:30.941 8531656d3 test/nvmf: Interrupt test for local pcie nvme device 00:01:30.941 318515b44 nvme/perf: interrupt mode support for pcie controller 00:01:31.000 [Pipeline] writeFile 00:01:31.009 [Pipeline] sh 00:01:31.282 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:01:31.293 [Pipeline] sh 00:01:31.571 + cat autorun-spdk.conf 00:01:31.571 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.571 SPDK_TEST_NVMF=1 00:01:31.571 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.571 SPDK_TEST_URING=1 00:01:31.571 SPDK_TEST_VFIOUSER=1 00:01:31.571 SPDK_TEST_USDT=1 00:01:31.571 SPDK_RUN_ASAN=1 00:01:31.571 SPDK_RUN_UBSAN=1 00:01:31.571 NET_TYPE=virt 00:01:31.571 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.578 RUN_NIGHTLY=1 00:01:31.580 [Pipeline] } 00:01:31.591 [Pipeline] // stage 00:01:31.604 [Pipeline] stage 00:01:31.606 [Pipeline] { (Run VM) 00:01:31.617 [Pipeline] sh 00:01:31.896 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:01:31.896 + echo 'Start stage prepare_nvme.sh' 00:01:31.896 Start stage prepare_nvme.sh 00:01:31.896 + [[ -n 5 ]] 00:01:31.896 + disk_prefix=ex5 00:01:31.896 + [[ -n /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest ]] 00:01:31.896 + [[ -e /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf ]] 00:01:31.896 + source /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf 00:01:31.896 ++ 
SPDK_RUN_FUNCTIONAL_TEST=1 00:01:31.896 ++ SPDK_TEST_NVMF=1 00:01:31.896 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:31.896 ++ SPDK_TEST_URING=1 00:01:31.896 ++ SPDK_TEST_VFIOUSER=1 00:01:31.896 ++ SPDK_TEST_USDT=1 00:01:31.896 ++ SPDK_RUN_ASAN=1 00:01:31.896 ++ SPDK_RUN_UBSAN=1 00:01:31.896 ++ NET_TYPE=virt 00:01:31.896 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:01:31.896 ++ RUN_NIGHTLY=1 00:01:31.896 + cd /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:31.896 + nvme_files=() 00:01:31.896 + declare -A nvme_files 00:01:31.896 + backend_dir=/var/lib/libvirt/images/backends 00:01:31.896 + nvme_files['nvme.img']=5G 00:01:31.896 + nvme_files['nvme-cmb.img']=5G 00:01:31.896 + nvme_files['nvme-multi0.img']=4G 00:01:31.896 + nvme_files['nvme-multi1.img']=4G 00:01:31.896 + nvme_files['nvme-multi2.img']=4G 00:01:31.896 + nvme_files['nvme-openstack.img']=8G 00:01:31.896 + nvme_files['nvme-zns.img']=5G 00:01:31.896 + (( SPDK_TEST_NVME_PMR == 1 )) 00:01:31.896 + (( SPDK_TEST_FTL == 1 )) 00:01:31.896 + (( SPDK_TEST_NVME_FDP == 1 )) 00:01:31.896 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:01:31.896 + for nvme in "${!nvme_files[@]}" 00:01:31.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi2.img -s 4G 00:01:31.896 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:01:31.896 + for nvme in "${!nvme_files[@]}" 00:01:31.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-cmb.img -s 5G 00:01:31.896 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:01:31.896 + for nvme in "${!nvme_files[@]}" 00:01:31.896 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-openstack.img -s 8G 00:01:32.155 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:01:32.155 + for nvme in "${!nvme_files[@]}" 00:01:32.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-zns.img -s 5G 00:01:32.155 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.155 + for nvme in "${!nvme_files[@]}" 00:01:32.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi1.img -s 4G 00:01:32.155 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.155 + for nvme in "${!nvme_files[@]}" 00:01:32.155 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme-multi0.img -s 4G 00:01:32.415 Formatting '/var/lib/libvirt/images/backends/ex5-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:01:32.415 + for nvme in "${!nvme_files[@]}" 00:01:32.415 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex5-nvme.img -s 5G 00:01:32.415 Formatting '/var/lib/libvirt/images/backends/ex5-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:01:32.415 ++ sudo grep -rl ex5-nvme.img /etc/libvirt/qemu 00:01:32.673 + echo 'End stage prepare_nvme.sh' 00:01:32.673 End stage prepare_nvme.sh 00:01:32.684 [Pipeline] sh 00:01:32.965 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:01:32.965 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt 
--qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex5-nvme.img -b /var/lib/libvirt/images/backends/ex5-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img -H -a -v -f fedora39 00:01:32.965 00:01:32.965 DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant 00:01:32.965 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk 00:01:32.965 VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:32.965 HELP=0 00:01:32.965 DRY_RUN=0 00:01:32.965 NVME_FILE=/var/lib/libvirt/images/backends/ex5-nvme.img,/var/lib/libvirt/images/backends/ex5-nvme-multi0.img, 00:01:32.965 NVME_DISKS_TYPE=nvme,nvme, 00:01:32.965 NVME_AUTO_CREATE=0 00:01:32.965 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex5-nvme-multi1.img:/var/lib/libvirt/images/backends/ex5-nvme-multi2.img, 00:01:32.965 NVME_CMB=,, 00:01:32.965 NVME_PMR=,, 00:01:32.965 NVME_ZNS=,, 00:01:32.965 NVME_MS=,, 00:01:32.965 NVME_FDP=,, 00:01:32.965 SPDK_VAGRANT_DISTRO=fedora39 00:01:32.965 SPDK_VAGRANT_VMCPU=10 00:01:32.965 SPDK_VAGRANT_VMRAM=12288 00:01:32.965 SPDK_VAGRANT_PROVIDER=libvirt 00:01:32.965 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:01:32.965 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:01:32.965 SPDK_OPENSTACK_NETWORK=0 00:01:32.965 VAGRANT_PACKAGE_BOX=0 00:01:32.965 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:01:32.965 FORCE_DISTRO=true 00:01:32.965 VAGRANT_BOX_VERSION= 00:01:32.965 EXTRA_VAGRANTFILES= 00:01:32.965 NIC_MODEL=e1000 00:01:32.965 00:01:32.965 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt' 00:01:32.965 /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest 00:01:36.251 Bringing machine 'default' up with 'libvirt' provider... 00:01:36.509 ==> default: Creating image (snapshot of base box volume). 00:01:36.768 ==> default: Creating domain with the following settings... 
00:01:36.768 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1731806504_68008e83b3a447c0ae6d 00:01:36.768 ==> default: -- Domain type: kvm 00:01:36.768 ==> default: -- Cpus: 10 00:01:36.768 ==> default: -- Feature: acpi 00:01:36.768 ==> default: -- Feature: apic 00:01:36.768 ==> default: -- Feature: pae 00:01:36.768 ==> default: -- Memory: 12288M 00:01:36.769 ==> default: -- Memory Backing: hugepages: 00:01:36.769 ==> default: -- Management MAC: 00:01:36.769 ==> default: -- Loader: 00:01:36.769 ==> default: -- Nvram: 00:01:36.769 ==> default: -- Base box: spdk/fedora39 00:01:36.769 ==> default: -- Storage pool: default 00:01:36.769 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1731806504_68008e83b3a447c0ae6d.img (20G) 00:01:36.769 ==> default: -- Volume Cache: default 00:01:36.769 ==> default: -- Kernel: 00:01:36.769 ==> default: -- Initrd: 00:01:36.769 ==> default: -- Graphics Type: vnc 00:01:36.769 ==> default: -- Graphics Port: -1 00:01:36.769 ==> default: -- Graphics IP: 127.0.0.1 00:01:36.769 ==> default: -- Graphics Password: Not defined 00:01:36.769 ==> default: -- Video Type: cirrus 00:01:36.769 ==> default: -- Video VRAM: 9216 00:01:36.769 ==> default: -- Sound Type: 00:01:36.769 ==> default: -- Keymap: en-us 00:01:36.769 ==> default: -- TPM Path: 00:01:36.769 ==> default: -- INPUT: type=mouse, bus=ps2 00:01:36.769 ==> default: -- Command line args: 00:01:36.769 ==> default: -> value=-device, 00:01:36.769 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:01:36.769 ==> default: -> value=-drive, 00:01:36.769 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0, 00:01:36.769 ==> default: -> value=-device, 00:01:36.769 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.769 ==> default: -> value=-device, 00:01:36.769 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:01:36.769 ==> default: -> value=-drive, 00:01:36.769 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:01:36.769 ==> default: -> value=-device, 00:01:36.769 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.769 ==> default: -> value=-drive, 00:01:36.769 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:01:36.769 ==> default: -> value=-device, 00:01:36.769 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.769 ==> default: -> value=-drive, 00:01:36.769 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:01:36.769 ==> default: -> value=-device, 00:01:36.769 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:01:36.769 ==> default: Creating shared folders metadata... 00:01:36.769 ==> default: Starting domain. 00:01:38.197 ==> default: Waiting for domain to get an IP address... 00:01:56.281 ==> default: Waiting for SSH to become available... 00:01:56.281 ==> default: Configuring and enabling network interfaces... 
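The "Command line args" recorded above are passed through to QEMU unchanged by the libvirt domain. As a minimal sketch (not taken from this log), the same two-controller NVMe layout could be expressed as a direct qemu-system-x86_64 invocation; only the -drive/-device arguments mirror the domain definition, while the accelerator, CPU, and memory flags are illustrative assumptions derived from the settings above.

# Sketch only: approximate standalone QEMU command line for the NVMe layout above.
# The -drive/-device arguments mirror this log; -enable-kvm, -smp and -m are
# assumptions based on the domain settings, and the boot disk, NIC and other
# devices the real VM has are omitted here.
qemu-system-x86_64 \
  -enable-kvm -smp 10 -m 12288 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme.img,if=none,id=nvme-0-drive0 \
  -device nvme,id=nvme-0,serial=12340,addr=0x10 \
  -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi0.img,if=none,id=nvme-1-drive0 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi1.img,if=none,id=nvme-1-drive1 \
  -drive format=raw,file=/var/lib/libvirt/images/backends/ex5-nvme-multi2.img,if=none,id=nvme-1-drive2 \
  -device nvme,id=nvme-1,serial=12341,addr=0x11 \
  -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096 \
  -device nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096

This is the layout the guest later reports as nvme0 with one namespace and nvme1 with three namespaces.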
00:01:58.833 default: SSH address: 192.168.121.7:22 00:01:58.833 default: SSH username: vagrant 00:01:58.833 default: SSH auth method: private key 00:02:01.366 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:09.482 ==> default: Mounting SSHFS shared folder... 00:02:10.050 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:10.050 ==> default: Checking Mount.. 00:02:11.426 ==> default: Folder Successfully Mounted! 00:02:11.426 ==> default: Running provisioner: file... 00:02:11.993 default: ~/.gitconfig => .gitconfig 00:02:12.252 00:02:12.252 SUCCESS! 00:02:12.252 00:02:12.252 cd to /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:12.252 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:12.252 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 00:02:12.252 00:02:12.261 [Pipeline] } 00:02:12.278 [Pipeline] // stage 00:02:12.289 [Pipeline] dir 00:02:12.289 Running in /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/fedora39-libvirt 00:02:12.291 [Pipeline] { 00:02:12.304 [Pipeline] catchError 00:02:12.307 [Pipeline] { 00:02:12.320 [Pipeline] sh 00:02:12.599 + vagrant ssh-config --host vagrant 00:02:12.599 + sed -ne /^Host/,$p 00:02:12.599 + tee ssh_conf 00:02:15.887 Host vagrant 00:02:15.887 HostName 192.168.121.7 00:02:15.887 User vagrant 00:02:15.887 Port 22 00:02:15.887 UserKnownHostsFile /dev/null 00:02:15.887 StrictHostKeyChecking no 00:02:15.887 PasswordAuthentication no 00:02:15.887 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:15.887 IdentitiesOnly yes 00:02:15.887 LogLevel FATAL 00:02:15.887 ForwardAgent yes 00:02:15.887 ForwardX11 yes 00:02:15.887 00:02:15.900 [Pipeline] withEnv 00:02:15.903 [Pipeline] { 00:02:15.917 [Pipeline] sh 00:02:16.200 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:16.200 source /etc/os-release 00:02:16.200 [[ -e /image.version ]] && img=$(< /image.version) 00:02:16.200 # Minimal, systemd-like check. 00:02:16.200 if [[ -e /.dockerenv ]]; then 00:02:16.200 # Clear garbage from the node's name: 00:02:16.200 # agt-er_autotest_547-896 -> autotest_547-896 00:02:16.200 # $HOSTNAME is the actual container id 00:02:16.200 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:16.200 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:16.200 # We can assume this is a mount from a host where container is running, 00:02:16.200 # so fetch its hostname to easily identify the target swarm worker. 
00:02:16.200 container="$(< /etc/hostname) ($agent)" 00:02:16.200 else 00:02:16.200 # Fallback 00:02:16.200 container=$agent 00:02:16.200 fi 00:02:16.200 fi 00:02:16.200 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:16.200 00:02:16.469 [Pipeline] } 00:02:16.486 [Pipeline] // withEnv 00:02:16.496 [Pipeline] setCustomBuildProperty 00:02:16.513 [Pipeline] stage 00:02:16.515 [Pipeline] { (Tests) 00:02:16.534 [Pipeline] sh 00:02:16.814 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:17.087 [Pipeline] sh 00:02:17.369 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:17.644 [Pipeline] timeout 00:02:17.645 Timeout set to expire in 1 hr 0 min 00:02:17.647 [Pipeline] { 00:02:17.665 [Pipeline] sh 00:02:17.948 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:18.515 HEAD is now at 83e8405e4 nvmf/fc: Qpair disconnect callback: Serialize FC delete connection & close qpair process 00:02:18.528 [Pipeline] sh 00:02:18.810 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:19.083 [Pipeline] sh 00:02:19.362 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:19.636 [Pipeline] sh 00:02:19.914 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=nvmf-tcp-uring-vg-autotest ./autoruner.sh spdk_repo 00:02:19.914 ++ readlink -f spdk_repo 00:02:19.914 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:19.914 + [[ -n /home/vagrant/spdk_repo ]] 00:02:19.914 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:19.914 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:19.914 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:19.914 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:19.914 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:19.914 + [[ nvmf-tcp-uring-vg-autotest == pkgdep-* ]] 00:02:19.914 + cd /home/vagrant/spdk_repo 00:02:19.914 + source /etc/os-release 00:02:19.914 ++ NAME='Fedora Linux' 00:02:19.914 ++ VERSION='39 (Cloud Edition)' 00:02:19.914 ++ ID=fedora 00:02:19.914 ++ VERSION_ID=39 00:02:19.914 ++ VERSION_CODENAME= 00:02:19.914 ++ PLATFORM_ID=platform:f39 00:02:19.914 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:19.914 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:19.914 ++ LOGO=fedora-logo-icon 00:02:19.914 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:19.914 ++ HOME_URL=https://fedoraproject.org/ 00:02:19.914 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:19.914 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:19.914 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:19.914 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:19.914 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:19.914 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:19.914 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:19.914 ++ SUPPORT_END=2024-11-12 00:02:19.914 ++ VARIANT='Cloud Edition' 00:02:19.914 ++ VARIANT_ID=cloud 00:02:19.914 + uname -a 00:02:20.172 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:20.172 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:20.430 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:20.430 Hugepages 00:02:20.430 node hugesize free / total 00:02:20.430 node0 1048576kB 0 / 0 00:02:20.430 node0 2048kB 0 / 0 00:02:20.430 00:02:20.430 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:20.689 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:20.689 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:20.689 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:02:20.689 + rm -f /tmp/spdk-ld-path 00:02:20.689 + source autorun-spdk.conf 00:02:20.689 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.689 ++ SPDK_TEST_NVMF=1 00:02:20.689 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.689 ++ SPDK_TEST_URING=1 00:02:20.689 ++ SPDK_TEST_VFIOUSER=1 00:02:20.689 ++ SPDK_TEST_USDT=1 00:02:20.689 ++ SPDK_RUN_ASAN=1 00:02:20.689 ++ SPDK_RUN_UBSAN=1 00:02:20.689 ++ NET_TYPE=virt 00:02:20.689 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.689 ++ RUN_NIGHTLY=1 00:02:20.689 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:20.689 + [[ -n '' ]] 00:02:20.689 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:20.689 + for M in /var/spdk/build-*-manifest.txt 00:02:20.689 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:20.689 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:20.689 + for M in /var/spdk/build-*-manifest.txt 00:02:20.689 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:20.689 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:20.689 + for M in /var/spdk/build-*-manifest.txt 00:02:20.689 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:20.689 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:20.689 ++ uname 00:02:20.689 + [[ Linux == \L\i\n\u\x ]] 00:02:20.689 + sudo dmesg -T 00:02:20.689 + sudo dmesg --clear 00:02:20.689 + dmesg_pid=5253 00:02:20.689 + sudo dmesg -Tw 00:02:20.689 + [[ Fedora Linux == FreeBSD ]] 00:02:20.689 + export 
UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.689 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:20.689 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:20.689 + [[ -x /usr/src/fio-static/fio ]] 00:02:20.689 + export FIO_BIN=/usr/src/fio-static/fio 00:02:20.689 + FIO_BIN=/usr/src/fio-static/fio 00:02:20.689 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:20.689 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:20.690 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:20.690 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.690 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:20.690 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:20.690 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.690 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:20.690 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.690 01:22:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:20.690 01:22:29 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_URING=1 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_TEST_VFIOUSER=1 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_TEST_USDT=1 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_RUN_ASAN=1 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_RUN_UBSAN=1 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@9 -- $ NET_TYPE=virt 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@10 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:20.690 01:22:29 -- spdk_repo/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:20.690 01:22:29 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:20.690 01:22:29 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:20.948 01:22:29 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:20.948 01:22:29 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:20.948 01:22:29 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:20.948 01:22:29 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:20.948 01:22:29 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:20.948 01:22:29 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:20.948 01:22:29 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.948 01:22:29 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.948 01:22:29 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.948 01:22:29 -- paths/export.sh@5 -- $ export PATH 00:02:20.948 01:22:29 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:20.948 01:22:29 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:20.948 01:22:29 -- common/autobuild_common.sh@486 -- $ date +%s 00:02:20.948 01:22:29 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1731806549.XXXXXX 00:02:20.948 01:22:29 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1731806549.i52IVA 00:02:20.948 01:22:29 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:02:20.948 01:22:29 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:02:20.948 01:22:29 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:20.948 01:22:29 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:20.948 01:22:29 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:20.948 01:22:29 -- common/autobuild_common.sh@502 -- $ get_config_params 00:02:20.948 01:22:29 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:20.948 01:22:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:20.948 01:22:29 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring' 00:02:20.948 01:22:29 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:02:20.948 01:22:29 -- pm/common@17 -- $ local monitor 00:02:20.948 01:22:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.948 01:22:29 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:20.948 01:22:29 -- pm/common@25 -- $ sleep 1 00:02:20.948 01:22:29 -- pm/common@21 -- $ date +%s 00:02:20.948 01:22:29 -- pm/common@21 -- $ date +%s 00:02:20.948 01:22:29 -- 
pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731806549 00:02:20.948 01:22:29 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1731806549 00:02:20.948 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731806549_collect-cpu-load.pm.log 00:02:20.948 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1731806549_collect-vmstat.pm.log 00:02:21.885 01:22:30 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:02:21.885 01:22:30 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:21.885 01:22:30 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:21.885 01:22:30 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:21.885 01:22:30 -- spdk/autobuild.sh@16 -- $ date -u 00:02:21.885 Sun Nov 17 01:22:30 AM UTC 2024 00:02:21.885 01:22:30 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:21.885 v25.01-pre-189-g83e8405e4 00:02:21.885 01:22:30 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:21.885 01:22:30 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:21.885 01:22:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:21.885 01:22:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:21.885 01:22:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.885 ************************************ 00:02:21.885 START TEST asan 00:02:21.885 ************************************ 00:02:21.885 using asan 00:02:21.885 01:22:30 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:21.885 00:02:21.885 real 0m0.000s 00:02:21.885 user 0m0.000s 00:02:21.885 sys 0m0.000s 00:02:21.885 01:22:30 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.885 ************************************ 00:02:21.885 END TEST asan 00:02:21.885 ************************************ 00:02:21.885 01:22:30 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:21.885 01:22:30 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:21.885 01:22:30 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:21.885 01:22:30 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:21.885 01:22:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:21.885 01:22:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.885 ************************************ 00:02:21.885 START TEST ubsan 00:02:21.885 ************************************ 00:02:21.885 using ubsan 00:02:21.885 01:22:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:21.885 00:02:21.885 real 0m0.000s 00:02:21.885 user 0m0.000s 00:02:21.885 sys 0m0.000s 00:02:21.885 01:22:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:21.885 ************************************ 00:02:21.885 END TEST ubsan 00:02:21.885 ************************************ 00:02:21.885 01:22:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:22.144 01:22:30 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:22.144 01:22:30 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:22.144 01:22:30 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:22.144 01:22:30 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:22.144 01:22:30 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:22.144 01:22:30 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:22.144 01:22:30 -- spdk/autobuild.sh@59 -- 
$ [[ 0 -eq 1 ]] 00:02:22.144 01:22:30 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:22.144 01:22:30 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-vfio-user --with-uring --with-shared 00:02:22.144 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:22.144 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:22.711 Using 'verbs' RDMA provider 00:02:35.889 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:02:50.764 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:02:50.764 Creating mk/config.mk...done. 00:02:50.764 Creating mk/cc.flags.mk...done. 00:02:50.764 Type 'make' to build. 00:02:50.764 01:22:57 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:02:50.764 01:22:57 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:50.764 01:22:57 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:50.764 01:22:57 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.764 ************************************ 00:02:50.764 START TEST make 00:02:50.764 ************************************ 00:02:50.764 01:22:57 make -- common/autotest_common.sh@1129 -- $ make -j10 00:02:50.764 make[1]: Nothing to be done for 'all'. 00:02:50.764 The Meson build system 00:02:50.764 Version: 1.5.0 00:02:50.764 Source dir: /home/vagrant/spdk_repo/spdk/libvfio-user 00:02:50.764 Build dir: /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:50.764 Build type: native build 00:02:50.764 Project name: libvfio-user 00:02:50.764 Project version: 0.0.1 00:02:50.764 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:50.764 C linker for the host machine: cc ld.bfd 2.40-14 00:02:50.764 Host machine cpu family: x86_64 00:02:50.764 Host machine cpu: x86_64 00:02:50.764 Run-time dependency threads found: YES 00:02:50.764 Library dl found: YES 00:02:50.764 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:50.764 Run-time dependency json-c found: YES 0.17 00:02:50.764 Run-time dependency cmocka found: YES 1.1.7 00:02:50.764 Program pytest-3 found: NO 00:02:50.764 Program flake8 found: NO 00:02:50.764 Program misspell-fixer found: NO 00:02:50.764 Program restructuredtext-lint found: NO 00:02:50.764 Program valgrind found: YES (/usr/bin/valgrind) 00:02:50.764 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:50.764 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:50.764 Compiler for C supports arguments -Wwrite-strings: YES 00:02:50.764 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:50.764 Program test-lspci.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-lspci.sh) 00:02:50.764 Program test-linkage.sh found: YES (/home/vagrant/spdk_repo/spdk/libvfio-user/test/test-linkage.sh) 00:02:50.764 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
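The autobuild step above configures the SPDK tree with the flags assembled from autorun-spdk.conf and then builds with make -j10. A minimal sketch of re-running that step by hand inside the VM, assuming the same checkout at /home/vagrant/spdk_repo/spdk; the configure flags are copied verbatim from the log, nothing else is implied.

# Sketch only: manually re-running the configure/build step recorded above.
# Flags are taken verbatim from the autobuild output in this log.
cd /home/vagrant/spdk_repo/spdk
./configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd \
    --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
    --enable-ubsan --enable-asan --enable-coverage --with-ublk \
    --with-vfio-user --with-uring --with-shared
make -j10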
00:02:50.764 Build targets in project: 8 00:02:50.764 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:50.764 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:50.764 00:02:50.764 libvfio-user 0.0.1 00:02:50.764 00:02:50.764 User defined options 00:02:50.764 buildtype : debug 00:02:50.764 default_library: shared 00:02:50.764 libdir : /usr/local/lib 00:02:50.764 00:02:50.764 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:51.330 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:51.330 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:51.589 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:51.589 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:51.589 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:51.589 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:51.589 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:51.589 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:51.589 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:51.589 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:51.589 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:51.589 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:51.589 [12/37] Compiling C object samples/null.p/null.c.o 00:02:51.589 [13/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:51.589 [14/37] Compiling C object samples/server.p/server.c.o 00:02:51.589 [15/37] Compiling C object samples/client.p/client.c.o 00:02:51.589 [16/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:51.848 [17/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:51.848 [18/37] Linking target samples/client 00:02:51.848 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:51.848 [20/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:51.848 [21/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:51.848 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:51.848 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:51.848 [24/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:51.848 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:51.848 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:51.848 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:51.848 [28/37] Linking target lib/libvfio-user.so.0.0.1 00:02:51.848 [29/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:51.848 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:52.107 [31/37] Linking target test/unit_tests 00:02:52.107 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:52.107 [33/37] Linking target samples/server 00:02:52.107 [34/37] Linking target samples/shadow_ioeventfd_server 00:02:52.107 [35/37] Linking target samples/null 00:02:52.107 [36/37] Linking target samples/lspci 00:02:52.107 [37/37] Linking target samples/gpio-pci-idio-16 00:02:52.107 INFO: autodetecting backend as ninja 00:02:52.107 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:52.107 
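The --with-vfio-user option pulls in the bundled libvfio-user, which Meson configured and Ninja compiled above. A hedged sketch of the equivalent standalone invocation, using the source and build directories from the log summary; driving meson/ninja by hand like this, outside SPDK's own build, is an assumption, and the option spellings are inferred from the "User defined options" block above.

# Sketch only: standalone equivalent of the libvfio-user build shown above.
# Directories match the log; invoking meson/ninja directly is an assumption.
cd /home/vagrant/spdk_repo/spdk
meson setup build/libvfio-user/build-debug libvfio-user \
    --buildtype debug --default-library shared --libdir /usr/local/lib
ninja -C build/libvfio-user/build-debug
# Installation into a DESTDIR staging area follows in the log.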
DESTDIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user meson install --quiet -C /home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug 00:02:52.675 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/build/libvfio-user/build-debug' 00:02:52.675 ninja: no work to do. 00:03:02.650 The Meson build system 00:03:02.650 Version: 1.5.0 00:03:02.650 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:02.650 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:02.650 Build type: native build 00:03:02.650 Program cat found: YES (/usr/bin/cat) 00:03:02.650 Project name: DPDK 00:03:02.650 Project version: 24.03.0 00:03:02.650 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:02.650 C linker for the host machine: cc ld.bfd 2.40-14 00:03:02.650 Host machine cpu family: x86_64 00:03:02.650 Host machine cpu: x86_64 00:03:02.650 Message: ## Building in Developer Mode ## 00:03:02.650 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:02.650 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:02.650 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:02.650 Program python3 found: YES (/usr/bin/python3) 00:03:02.650 Program cat found: YES (/usr/bin/cat) 00:03:02.650 Compiler for C supports arguments -march=native: YES 00:03:02.650 Checking for size of "void *" : 8 00:03:02.650 Checking for size of "void *" : 8 (cached) 00:03:02.650 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:02.650 Library m found: YES 00:03:02.650 Library numa found: YES 00:03:02.650 Has header "numaif.h" : YES 00:03:02.650 Library fdt found: NO 00:03:02.650 Library execinfo found: NO 00:03:02.650 Has header "execinfo.h" : YES 00:03:02.650 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:02.650 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:02.650 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:02.650 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:02.650 Run-time dependency openssl found: YES 3.1.1 00:03:02.650 Run-time dependency libpcap found: YES 1.10.4 00:03:02.650 Has header "pcap.h" with dependency libpcap: YES 00:03:02.650 Compiler for C supports arguments -Wcast-qual: YES 00:03:02.650 Compiler for C supports arguments -Wdeprecated: YES 00:03:02.650 Compiler for C supports arguments -Wformat: YES 00:03:02.650 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:02.650 Compiler for C supports arguments -Wformat-security: NO 00:03:02.650 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:02.650 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:02.650 Compiler for C supports arguments -Wnested-externs: YES 00:03:02.650 Compiler for C supports arguments -Wold-style-definition: YES 00:03:02.650 Compiler for C supports arguments -Wpointer-arith: YES 00:03:02.650 Compiler for C supports arguments -Wsign-compare: YES 00:03:02.650 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:02.650 Compiler for C supports arguments -Wundef: YES 00:03:02.650 Compiler for C supports arguments -Wwrite-strings: YES 00:03:02.650 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:02.650 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:02.650 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:02.650 Compiler for C supports arguments 
-Wno-zero-length-bounds: YES 00:03:02.650 Program objdump found: YES (/usr/bin/objdump) 00:03:02.650 Compiler for C supports arguments -mavx512f: YES 00:03:02.650 Checking if "AVX512 checking" compiles: YES 00:03:02.650 Fetching value of define "__SSE4_2__" : 1 00:03:02.650 Fetching value of define "__AES__" : 1 00:03:02.650 Fetching value of define "__AVX__" : 1 00:03:02.650 Fetching value of define "__AVX2__" : 1 00:03:02.650 Fetching value of define "__AVX512BW__" : (undefined) 00:03:02.650 Fetching value of define "__AVX512CD__" : (undefined) 00:03:02.650 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:02.650 Fetching value of define "__AVX512F__" : (undefined) 00:03:02.650 Fetching value of define "__AVX512VL__" : (undefined) 00:03:02.650 Fetching value of define "__PCLMUL__" : 1 00:03:02.650 Fetching value of define "__RDRND__" : 1 00:03:02.650 Fetching value of define "__RDSEED__" : 1 00:03:02.650 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:02.650 Fetching value of define "__znver1__" : (undefined) 00:03:02.650 Fetching value of define "__znver2__" : (undefined) 00:03:02.650 Fetching value of define "__znver3__" : (undefined) 00:03:02.650 Fetching value of define "__znver4__" : (undefined) 00:03:02.650 Library asan found: YES 00:03:02.650 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:02.650 Message: lib/log: Defining dependency "log" 00:03:02.650 Message: lib/kvargs: Defining dependency "kvargs" 00:03:02.650 Message: lib/telemetry: Defining dependency "telemetry" 00:03:02.650 Library rt found: YES 00:03:02.650 Checking for function "getentropy" : NO 00:03:02.650 Message: lib/eal: Defining dependency "eal" 00:03:02.650 Message: lib/ring: Defining dependency "ring" 00:03:02.650 Message: lib/rcu: Defining dependency "rcu" 00:03:02.650 Message: lib/mempool: Defining dependency "mempool" 00:03:02.650 Message: lib/mbuf: Defining dependency "mbuf" 00:03:02.650 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:02.650 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:02.650 Compiler for C supports arguments -mpclmul: YES 00:03:02.650 Compiler for C supports arguments -maes: YES 00:03:02.650 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:02.650 Compiler for C supports arguments -mavx512bw: YES 00:03:02.650 Compiler for C supports arguments -mavx512dq: YES 00:03:02.650 Compiler for C supports arguments -mavx512vl: YES 00:03:02.650 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:02.650 Compiler for C supports arguments -mavx2: YES 00:03:02.650 Compiler for C supports arguments -mavx: YES 00:03:02.650 Message: lib/net: Defining dependency "net" 00:03:02.650 Message: lib/meter: Defining dependency "meter" 00:03:02.650 Message: lib/ethdev: Defining dependency "ethdev" 00:03:02.650 Message: lib/pci: Defining dependency "pci" 00:03:02.650 Message: lib/cmdline: Defining dependency "cmdline" 00:03:02.650 Message: lib/hash: Defining dependency "hash" 00:03:02.650 Message: lib/timer: Defining dependency "timer" 00:03:02.650 Message: lib/compressdev: Defining dependency "compressdev" 00:03:02.650 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:02.650 Message: lib/dmadev: Defining dependency "dmadev" 00:03:02.650 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:02.650 Message: lib/power: Defining dependency "power" 00:03:02.650 Message: lib/reorder: Defining dependency "reorder" 00:03:02.650 Message: lib/security: Defining dependency "security" 00:03:02.650 Has header 
"linux/userfaultfd.h" : YES 00:03:02.650 Has header "linux/vduse.h" : YES 00:03:02.650 Message: lib/vhost: Defining dependency "vhost" 00:03:02.650 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:02.650 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:02.651 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:02.651 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:02.651 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:02.651 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:02.651 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:02.651 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:02.651 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:02.651 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:02.651 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:02.651 Configuring doxy-api-html.conf using configuration 00:03:02.651 Configuring doxy-api-man.conf using configuration 00:03:02.651 Program mandb found: YES (/usr/bin/mandb) 00:03:02.651 Program sphinx-build found: NO 00:03:02.651 Configuring rte_build_config.h using configuration 00:03:02.651 Message: 00:03:02.651 ================= 00:03:02.651 Applications Enabled 00:03:02.651 ================= 00:03:02.651 00:03:02.651 apps: 00:03:02.651 00:03:02.651 00:03:02.651 Message: 00:03:02.651 ================= 00:03:02.651 Libraries Enabled 00:03:02.651 ================= 00:03:02.651 00:03:02.651 libs: 00:03:02.651 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:02.651 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:02.651 cryptodev, dmadev, power, reorder, security, vhost, 00:03:02.651 00:03:02.651 Message: 00:03:02.651 =============== 00:03:02.651 Drivers Enabled 00:03:02.651 =============== 00:03:02.651 00:03:02.651 common: 00:03:02.651 00:03:02.651 bus: 00:03:02.651 pci, vdev, 00:03:02.651 mempool: 00:03:02.651 ring, 00:03:02.651 dma: 00:03:02.651 00:03:02.651 net: 00:03:02.651 00:03:02.651 crypto: 00:03:02.651 00:03:02.651 compress: 00:03:02.651 00:03:02.651 vdpa: 00:03:02.651 00:03:02.651 00:03:02.651 Message: 00:03:02.651 ================= 00:03:02.651 Content Skipped 00:03:02.651 ================= 00:03:02.651 00:03:02.651 apps: 00:03:02.651 dumpcap: explicitly disabled via build config 00:03:02.651 graph: explicitly disabled via build config 00:03:02.651 pdump: explicitly disabled via build config 00:03:02.651 proc-info: explicitly disabled via build config 00:03:02.651 test-acl: explicitly disabled via build config 00:03:02.651 test-bbdev: explicitly disabled via build config 00:03:02.651 test-cmdline: explicitly disabled via build config 00:03:02.651 test-compress-perf: explicitly disabled via build config 00:03:02.651 test-crypto-perf: explicitly disabled via build config 00:03:02.651 test-dma-perf: explicitly disabled via build config 00:03:02.651 test-eventdev: explicitly disabled via build config 00:03:02.651 test-fib: explicitly disabled via build config 00:03:02.651 test-flow-perf: explicitly disabled via build config 00:03:02.651 test-gpudev: explicitly disabled via build config 00:03:02.651 test-mldev: explicitly disabled via build config 00:03:02.651 test-pipeline: explicitly disabled via build config 00:03:02.651 test-pmd: explicitly disabled via build config 00:03:02.651 test-regex: explicitly disabled via build config 00:03:02.651 
test-sad: explicitly disabled via build config 00:03:02.651 test-security-perf: explicitly disabled via build config 00:03:02.651 00:03:02.651 libs: 00:03:02.651 argparse: explicitly disabled via build config 00:03:02.651 metrics: explicitly disabled via build config 00:03:02.651 acl: explicitly disabled via build config 00:03:02.651 bbdev: explicitly disabled via build config 00:03:02.651 bitratestats: explicitly disabled via build config 00:03:02.651 bpf: explicitly disabled via build config 00:03:02.651 cfgfile: explicitly disabled via build config 00:03:02.651 distributor: explicitly disabled via build config 00:03:02.651 efd: explicitly disabled via build config 00:03:02.651 eventdev: explicitly disabled via build config 00:03:02.651 dispatcher: explicitly disabled via build config 00:03:02.651 gpudev: explicitly disabled via build config 00:03:02.651 gro: explicitly disabled via build config 00:03:02.651 gso: explicitly disabled via build config 00:03:02.651 ip_frag: explicitly disabled via build config 00:03:02.651 jobstats: explicitly disabled via build config 00:03:02.651 latencystats: explicitly disabled via build config 00:03:02.651 lpm: explicitly disabled via build config 00:03:02.651 member: explicitly disabled via build config 00:03:02.651 pcapng: explicitly disabled via build config 00:03:02.651 rawdev: explicitly disabled via build config 00:03:02.651 regexdev: explicitly disabled via build config 00:03:02.651 mldev: explicitly disabled via build config 00:03:02.651 rib: explicitly disabled via build config 00:03:02.651 sched: explicitly disabled via build config 00:03:02.651 stack: explicitly disabled via build config 00:03:02.651 ipsec: explicitly disabled via build config 00:03:02.651 pdcp: explicitly disabled via build config 00:03:02.651 fib: explicitly disabled via build config 00:03:02.651 port: explicitly disabled via build config 00:03:02.651 pdump: explicitly disabled via build config 00:03:02.651 table: explicitly disabled via build config 00:03:02.651 pipeline: explicitly disabled via build config 00:03:02.651 graph: explicitly disabled via build config 00:03:02.651 node: explicitly disabled via build config 00:03:02.651 00:03:02.651 drivers: 00:03:02.651 common/cpt: not in enabled drivers build config 00:03:02.651 common/dpaax: not in enabled drivers build config 00:03:02.651 common/iavf: not in enabled drivers build config 00:03:02.651 common/idpf: not in enabled drivers build config 00:03:02.651 common/ionic: not in enabled drivers build config 00:03:02.651 common/mvep: not in enabled drivers build config 00:03:02.651 common/octeontx: not in enabled drivers build config 00:03:02.651 bus/auxiliary: not in enabled drivers build config 00:03:02.651 bus/cdx: not in enabled drivers build config 00:03:02.651 bus/dpaa: not in enabled drivers build config 00:03:02.651 bus/fslmc: not in enabled drivers build config 00:03:02.651 bus/ifpga: not in enabled drivers build config 00:03:02.651 bus/platform: not in enabled drivers build config 00:03:02.651 bus/uacce: not in enabled drivers build config 00:03:02.651 bus/vmbus: not in enabled drivers build config 00:03:02.651 common/cnxk: not in enabled drivers build config 00:03:02.651 common/mlx5: not in enabled drivers build config 00:03:02.651 common/nfp: not in enabled drivers build config 00:03:02.651 common/nitrox: not in enabled drivers build config 00:03:02.651 common/qat: not in enabled drivers build config 00:03:02.651 common/sfc_efx: not in enabled drivers build config 00:03:02.651 mempool/bucket: not in enabled 
drivers build config 00:03:02.651 mempool/cnxk: not in enabled drivers build config 00:03:02.651 mempool/dpaa: not in enabled drivers build config 00:03:02.651 mempool/dpaa2: not in enabled drivers build config 00:03:02.651 mempool/octeontx: not in enabled drivers build config 00:03:02.651 mempool/stack: not in enabled drivers build config 00:03:02.651 dma/cnxk: not in enabled drivers build config 00:03:02.651 dma/dpaa: not in enabled drivers build config 00:03:02.651 dma/dpaa2: not in enabled drivers build config 00:03:02.651 dma/hisilicon: not in enabled drivers build config 00:03:02.651 dma/idxd: not in enabled drivers build config 00:03:02.651 dma/ioat: not in enabled drivers build config 00:03:02.651 dma/skeleton: not in enabled drivers build config 00:03:02.651 net/af_packet: not in enabled drivers build config 00:03:02.651 net/af_xdp: not in enabled drivers build config 00:03:02.651 net/ark: not in enabled drivers build config 00:03:02.651 net/atlantic: not in enabled drivers build config 00:03:02.651 net/avp: not in enabled drivers build config 00:03:02.651 net/axgbe: not in enabled drivers build config 00:03:02.651 net/bnx2x: not in enabled drivers build config 00:03:02.651 net/bnxt: not in enabled drivers build config 00:03:02.651 net/bonding: not in enabled drivers build config 00:03:02.651 net/cnxk: not in enabled drivers build config 00:03:02.651 net/cpfl: not in enabled drivers build config 00:03:02.651 net/cxgbe: not in enabled drivers build config 00:03:02.651 net/dpaa: not in enabled drivers build config 00:03:02.651 net/dpaa2: not in enabled drivers build config 00:03:02.651 net/e1000: not in enabled drivers build config 00:03:02.651 net/ena: not in enabled drivers build config 00:03:02.651 net/enetc: not in enabled drivers build config 00:03:02.651 net/enetfec: not in enabled drivers build config 00:03:02.651 net/enic: not in enabled drivers build config 00:03:02.651 net/failsafe: not in enabled drivers build config 00:03:02.651 net/fm10k: not in enabled drivers build config 00:03:02.651 net/gve: not in enabled drivers build config 00:03:02.651 net/hinic: not in enabled drivers build config 00:03:02.651 net/hns3: not in enabled drivers build config 00:03:02.651 net/i40e: not in enabled drivers build config 00:03:02.651 net/iavf: not in enabled drivers build config 00:03:02.651 net/ice: not in enabled drivers build config 00:03:02.651 net/idpf: not in enabled drivers build config 00:03:02.651 net/igc: not in enabled drivers build config 00:03:02.651 net/ionic: not in enabled drivers build config 00:03:02.651 net/ipn3ke: not in enabled drivers build config 00:03:02.651 net/ixgbe: not in enabled drivers build config 00:03:02.651 net/mana: not in enabled drivers build config 00:03:02.651 net/memif: not in enabled drivers build config 00:03:02.651 net/mlx4: not in enabled drivers build config 00:03:02.651 net/mlx5: not in enabled drivers build config 00:03:02.651 net/mvneta: not in enabled drivers build config 00:03:02.651 net/mvpp2: not in enabled drivers build config 00:03:02.651 net/netvsc: not in enabled drivers build config 00:03:02.651 net/nfb: not in enabled drivers build config 00:03:02.651 net/nfp: not in enabled drivers build config 00:03:02.651 net/ngbe: not in enabled drivers build config 00:03:02.651 net/null: not in enabled drivers build config 00:03:02.651 net/octeontx: not in enabled drivers build config 00:03:02.651 net/octeon_ep: not in enabled drivers build config 00:03:02.651 net/pcap: not in enabled drivers build config 00:03:02.651 net/pfe: not in 
enabled drivers build config 00:03:02.651 net/qede: not in enabled drivers build config 00:03:02.651 net/ring: not in enabled drivers build config 00:03:02.652 net/sfc: not in enabled drivers build config 00:03:02.652 net/softnic: not in enabled drivers build config 00:03:02.652 net/tap: not in enabled drivers build config 00:03:02.652 net/thunderx: not in enabled drivers build config 00:03:02.652 net/txgbe: not in enabled drivers build config 00:03:02.652 net/vdev_netvsc: not in enabled drivers build config 00:03:02.652 net/vhost: not in enabled drivers build config 00:03:02.652 net/virtio: not in enabled drivers build config 00:03:02.652 net/vmxnet3: not in enabled drivers build config 00:03:02.652 raw/*: missing internal dependency, "rawdev" 00:03:02.652 crypto/armv8: not in enabled drivers build config 00:03:02.652 crypto/bcmfs: not in enabled drivers build config 00:03:02.652 crypto/caam_jr: not in enabled drivers build config 00:03:02.652 crypto/ccp: not in enabled drivers build config 00:03:02.652 crypto/cnxk: not in enabled drivers build config 00:03:02.652 crypto/dpaa_sec: not in enabled drivers build config 00:03:02.652 crypto/dpaa2_sec: not in enabled drivers build config 00:03:02.652 crypto/ipsec_mb: not in enabled drivers build config 00:03:02.652 crypto/mlx5: not in enabled drivers build config 00:03:02.652 crypto/mvsam: not in enabled drivers build config 00:03:02.652 crypto/nitrox: not in enabled drivers build config 00:03:02.652 crypto/null: not in enabled drivers build config 00:03:02.652 crypto/octeontx: not in enabled drivers build config 00:03:02.652 crypto/openssl: not in enabled drivers build config 00:03:02.652 crypto/scheduler: not in enabled drivers build config 00:03:02.652 crypto/uadk: not in enabled drivers build config 00:03:02.652 crypto/virtio: not in enabled drivers build config 00:03:02.652 compress/isal: not in enabled drivers build config 00:03:02.652 compress/mlx5: not in enabled drivers build config 00:03:02.652 compress/nitrox: not in enabled drivers build config 00:03:02.652 compress/octeontx: not in enabled drivers build config 00:03:02.652 compress/zlib: not in enabled drivers build config 00:03:02.652 regex/*: missing internal dependency, "regexdev" 00:03:02.652 ml/*: missing internal dependency, "mldev" 00:03:02.652 vdpa/ifc: not in enabled drivers build config 00:03:02.652 vdpa/mlx5: not in enabled drivers build config 00:03:02.652 vdpa/nfp: not in enabled drivers build config 00:03:02.652 vdpa/sfc: not in enabled drivers build config 00:03:02.652 event/*: missing internal dependency, "eventdev" 00:03:02.652 baseband/*: missing internal dependency, "bbdev" 00:03:02.652 gpu/*: missing internal dependency, "gpudev" 00:03:02.652 00:03:02.652 00:03:02.652 Build targets in project: 85 00:03:02.652 00:03:02.652 DPDK 24.03.0 00:03:02.652 00:03:02.652 User defined options 00:03:02.652 buildtype : debug 00:03:02.652 default_library : shared 00:03:02.652 libdir : lib 00:03:02.652 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:02.652 b_sanitize : address 00:03:02.652 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:02.652 c_link_args : 00:03:02.652 cpu_instruction_set: native 00:03:02.652 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:02.652 disable_libs : 
acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:02.652 enable_docs : false 00:03:02.652 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:03:02.652 enable_kmods : false 00:03:02.652 max_lcores : 128 00:03:02.652 tests : false 00:03:02.652 00:03:02.652 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:02.910 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:02.910 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:03.169 [2/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:03:03.169 [3/268] Linking static target lib/librte_log.a 00:03:03.169 [4/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:03.169 [5/268] Linking static target lib/librte_kvargs.a 00:03:03.169 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:03.427 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:03.685 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:03.685 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:03.942 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:03.942 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:03.942 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:03.942 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:03.942 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:03.942 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:04.199 [16/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:04.199 [17/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:04.199 [18/268] Linking static target lib/librte_telemetry.a 00:03:04.199 [19/268] Linking target lib/librte_log.so.24.1 00:03:04.199 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:04.457 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:04.457 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:04.716 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:04.716 [24/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:04.716 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:04.716 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:04.974 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:03:04.974 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:04.974 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:04.974 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:04.974 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:04.974 [32/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.281 [33/268] Linking target lib/librte_telemetry.so.24.1 00:03:05.281 [34/268] Compiling C 
object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:05.281 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:05.539 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:05.539 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:05.797 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:05.797 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:05.797 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:05.797 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:05.797 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:05.797 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:05.797 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:06.056 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:06.314 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:06.314 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:06.314 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:06.314 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:06.573 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:06.573 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:06.832 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:06.832 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:07.090 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:07.090 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:07.090 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:07.090 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:07.349 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:07.349 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:07.349 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:07.608 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:07.608 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:07.608 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:07.866 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:07.866 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:07.866 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:07.866 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:08.124 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:08.383 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:08.383 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:08.641 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:08.641 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:08.641 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 
00:03:08.641 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:08.641 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:08.641 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:08.899 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:08.899 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:08.899 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:08.899 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:09.157 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:09.157 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:09.157 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:09.157 [84/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:09.415 [85/268] Linking static target lib/librte_eal.a 00:03:09.415 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:09.415 [87/268] Linking static target lib/librte_ring.a 00:03:09.673 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:09.673 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:09.673 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:09.931 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:09.931 [92/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:09.931 [93/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:03:09.931 [94/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:09.931 [95/268] Linking static target lib/librte_mempool.a 00:03:09.931 [96/268] Linking static target lib/librte_rcu.a 00:03:09.931 [97/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.190 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:10.448 [99/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:10.448 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:10.707 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:10.707 [102/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:10.707 [103/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:10.707 [104/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:10.707 [105/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:10.964 [106/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:10.964 [107/268] Linking static target lib/librte_net.a 00:03:10.964 [108/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:10.964 [109/268] Linking static target lib/librte_mbuf.a 00:03:11.222 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:11.222 [111/268] Linking static target lib/librte_meter.a 00:03:11.222 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.222 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:11.222 [114/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:11.222 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:11.480 [116/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:11.480 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:11.480 [118/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.046 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:12.046 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:12.046 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:12.305 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:12.563 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:12.563 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:12.563 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:12.563 [126/268] Linking static target lib/librte_pci.a 00:03:12.822 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:12.822 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:13.080 [129/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:13.080 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:13.080 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:13.080 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:13.080 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:13.080 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:13.080 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:13.339 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:13.339 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:13.339 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:13.339 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:13.339 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:13.339 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:13.339 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:13.339 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:13.339 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:13.597 [145/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:13.856 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:13.856 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:13.856 [148/268] Linking static target lib/librte_cmdline.a 00:03:14.114 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:14.114 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:14.373 [151/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:14.373 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:14.373 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:14.631 [154/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:14.631 [155/268] Linking static target 
lib/librte_timer.a 00:03:14.890 [156/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:14.890 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:14.890 [158/268] Linking static target lib/librte_hash.a 00:03:14.890 [159/268] Linking static target lib/librte_compressdev.a 00:03:14.890 [160/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:15.148 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:15.148 [162/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:15.406 [163/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:15.406 [164/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.406 [165/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:15.664 [166/268] Linking static target lib/librte_ethdev.a 00:03:15.664 [167/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.664 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:15.665 [169/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:15.665 [170/268] Linking static target lib/librte_dmadev.a 00:03:15.665 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:15.923 [172/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:15.923 [173/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:15.923 [174/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:16.181 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.439 [176/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:16.439 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:16.439 [178/268] Linking static target lib/librte_cryptodev.a 00:03:16.439 [179/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:16.439 [180/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:16.698 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:16.698 [182/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:03:16.698 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:16.956 [184/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:17.214 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:17.214 [186/268] Linking static target lib/librte_power.a 00:03:17.214 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:17.214 [188/268] Linking static target lib/librte_reorder.a 00:03:17.472 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:17.472 [190/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:17.472 [191/268] Linking static target lib/librte_security.a 00:03:17.472 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:17.730 [193/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:17.988 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.247 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:18.247 
[196/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.507 [197/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:18.507 [198/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:18.766 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:18.766 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:19.025 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:19.025 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:19.025 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:19.284 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:19.284 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:19.543 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:19.543 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:19.802 [208/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:19.802 [209/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:19.802 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:19.802 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:20.061 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:20.061 [213/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:20.061 [214/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.061 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:20.061 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:20.061 [217/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.061 [218/268] Linking static target drivers/librte_bus_pci.a 00:03:20.061 [219/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:20.061 [220/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:20.061 [221/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:20.320 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:20.320 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.320 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:20.320 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:20.579 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:20.856 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.427 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:21.427 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:21.427 [230/268] Linking target lib/librte_eal.so.24.1 00:03:21.684 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:21.685 [232/268] Linking target lib/librte_dmadev.so.24.1 00:03:21.685 [233/268] Linking target 
lib/librte_pci.so.24.1 00:03:21.685 [234/268] Linking target lib/librte_ring.so.24.1 00:03:21.685 [235/268] Linking target lib/librte_timer.so.24.1 00:03:21.685 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:21.685 [237/268] Linking target lib/librte_meter.so.24.1 00:03:21.943 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:21.943 [239/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:21.943 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:21.943 [241/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:21.943 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:21.943 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:21.943 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:21.943 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:21.943 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:21.943 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:22.202 [248/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:22.202 [249/268] Linking target lib/librte_mbuf.so.24.1 00:03:22.202 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:22.202 [251/268] Linking target lib/librte_compressdev.so.24.1 00:03:22.202 [252/268] Linking target lib/librte_net.so.24.1 00:03:22.202 [253/268] Linking target lib/librte_reorder.so.24.1 00:03:22.202 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:22.461 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:22.461 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:22.461 [257/268] Linking target lib/librte_hash.so.24.1 00:03:22.461 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:22.461 [259/268] Linking target lib/librte_security.so.24.1 00:03:22.721 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:23.290 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.290 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:23.549 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:23.549 [264/268] Linking target lib/librte_power.so.24.1 00:03:25.453 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:25.453 [266/268] Linking static target lib/librte_vhost.a 00:03:26.832 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:27.091 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:27.091 INFO: autodetecting backend as ninja 00:03:27.091 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:49.032 CC lib/ut_mock/mock.o 00:03:49.032 CC lib/log/log.o 00:03:49.032 CC lib/log/log_flags.o 00:03:49.032 CC lib/log/log_deprecated.o 00:03:49.032 CC lib/ut/ut.o 00:03:49.033 LIB libspdk_ut_mock.a 00:03:49.033 LIB libspdk_ut.a 00:03:49.033 SO libspdk_ut_mock.so.6.0 00:03:49.033 SO libspdk_ut.so.2.0 00:03:49.033 LIB libspdk_log.a 00:03:49.033 SO libspdk_log.so.7.1 00:03:49.033 SYMLINK libspdk_ut_mock.so 00:03:49.033 SYMLINK libspdk_ut.so 00:03:49.033 SYMLINK libspdk_log.so 00:03:49.033 CC lib/dma/dma.o 
00:03:49.033 CC lib/util/base64.o 00:03:49.033 CXX lib/trace_parser/trace.o 00:03:49.033 CC lib/util/bit_array.o 00:03:49.033 CC lib/util/cpuset.o 00:03:49.033 CC lib/util/crc16.o 00:03:49.033 CC lib/ioat/ioat.o 00:03:49.033 CC lib/util/crc32.o 00:03:49.033 CC lib/util/crc32c.o 00:03:49.033 CC lib/vfio_user/host/vfio_user_pci.o 00:03:49.033 CC lib/util/crc32_ieee.o 00:03:49.033 CC lib/util/crc64.o 00:03:49.033 CC lib/util/dif.o 00:03:49.033 CC lib/util/fd.o 00:03:49.033 LIB libspdk_dma.a 00:03:49.033 CC lib/util/fd_group.o 00:03:49.033 SO libspdk_dma.so.5.0 00:03:49.033 CC lib/util/file.o 00:03:49.033 CC lib/util/hexlify.o 00:03:49.033 CC lib/util/iov.o 00:03:49.033 SYMLINK libspdk_dma.so 00:03:49.033 CC lib/util/math.o 00:03:49.033 LIB libspdk_ioat.a 00:03:49.033 CC lib/vfio_user/host/vfio_user.o 00:03:49.033 SO libspdk_ioat.so.7.0 00:03:49.033 CC lib/util/net.o 00:03:49.033 CC lib/util/pipe.o 00:03:49.033 SYMLINK libspdk_ioat.so 00:03:49.033 CC lib/util/strerror_tls.o 00:03:49.033 CC lib/util/string.o 00:03:49.033 CC lib/util/uuid.o 00:03:49.033 CC lib/util/xor.o 00:03:49.033 CC lib/util/zipf.o 00:03:49.033 CC lib/util/md5.o 00:03:49.033 LIB libspdk_vfio_user.a 00:03:49.033 SO libspdk_vfio_user.so.5.0 00:03:49.033 SYMLINK libspdk_vfio_user.so 00:03:49.033 LIB libspdk_util.a 00:03:49.033 SO libspdk_util.so.10.1 00:03:49.033 LIB libspdk_trace_parser.a 00:03:49.033 SYMLINK libspdk_util.so 00:03:49.033 SO libspdk_trace_parser.so.6.0 00:03:49.033 SYMLINK libspdk_trace_parser.so 00:03:49.033 CC lib/conf/conf.o 00:03:49.033 CC lib/rdma_utils/rdma_utils.o 00:03:49.033 CC lib/vmd/vmd.o 00:03:49.033 CC lib/vmd/led.o 00:03:49.033 CC lib/env_dpdk/env.o 00:03:49.033 CC lib/env_dpdk/memory.o 00:03:49.033 CC lib/env_dpdk/pci.o 00:03:49.033 CC lib/env_dpdk/init.o 00:03:49.033 CC lib/idxd/idxd.o 00:03:49.033 CC lib/json/json_parse.o 00:03:49.033 CC lib/idxd/idxd_user.o 00:03:49.033 LIB libspdk_conf.a 00:03:49.033 SO libspdk_conf.so.6.0 00:03:49.033 CC lib/json/json_util.o 00:03:49.033 SYMLINK libspdk_conf.so 00:03:49.033 CC lib/json/json_write.o 00:03:49.033 LIB libspdk_rdma_utils.a 00:03:49.033 SO libspdk_rdma_utils.so.1.0 00:03:49.033 CC lib/env_dpdk/threads.o 00:03:49.033 CC lib/env_dpdk/pci_ioat.o 00:03:49.033 SYMLINK libspdk_rdma_utils.so 00:03:49.033 CC lib/env_dpdk/pci_virtio.o 00:03:49.033 CC lib/env_dpdk/pci_vmd.o 00:03:49.033 CC lib/idxd/idxd_kernel.o 00:03:49.033 CC lib/env_dpdk/pci_idxd.o 00:03:49.033 CC lib/env_dpdk/pci_event.o 00:03:49.033 CC lib/env_dpdk/sigbus_handler.o 00:03:49.033 LIB libspdk_json.a 00:03:49.033 SO libspdk_json.so.6.0 00:03:49.033 CC lib/rdma_provider/common.o 00:03:49.033 CC lib/env_dpdk/pci_dpdk.o 00:03:49.033 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:49.033 SYMLINK libspdk_json.so 00:03:49.033 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:49.033 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:49.033 LIB libspdk_idxd.a 00:03:49.033 LIB libspdk_vmd.a 00:03:49.033 SO libspdk_idxd.so.12.1 00:03:49.033 SO libspdk_vmd.so.6.0 00:03:49.033 SYMLINK libspdk_idxd.so 00:03:49.033 SYMLINK libspdk_vmd.so 00:03:49.033 CC lib/jsonrpc/jsonrpc_server.o 00:03:49.033 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:49.033 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:49.033 CC lib/jsonrpc/jsonrpc_client.o 00:03:49.033 LIB libspdk_rdma_provider.a 00:03:49.033 SO libspdk_rdma_provider.so.7.0 00:03:49.292 SYMLINK libspdk_rdma_provider.so 00:03:49.292 LIB libspdk_jsonrpc.a 00:03:49.292 SO libspdk_jsonrpc.so.6.0 00:03:49.551 SYMLINK libspdk_jsonrpc.so 00:03:49.810 CC lib/rpc/rpc.o 00:03:49.810 LIB 
libspdk_env_dpdk.a 00:03:49.810 LIB libspdk_rpc.a 00:03:50.069 SO libspdk_rpc.so.6.0 00:03:50.069 SO libspdk_env_dpdk.so.15.1 00:03:50.069 SYMLINK libspdk_rpc.so 00:03:50.069 SYMLINK libspdk_env_dpdk.so 00:03:50.328 CC lib/trace/trace.o 00:03:50.328 CC lib/trace/trace_rpc.o 00:03:50.328 CC lib/trace/trace_flags.o 00:03:50.328 CC lib/keyring/keyring_rpc.o 00:03:50.328 CC lib/keyring/keyring.o 00:03:50.328 CC lib/notify/notify.o 00:03:50.328 CC lib/notify/notify_rpc.o 00:03:50.328 LIB libspdk_notify.a 00:03:50.587 SO libspdk_notify.so.6.0 00:03:50.587 SYMLINK libspdk_notify.so 00:03:50.587 LIB libspdk_keyring.a 00:03:50.587 LIB libspdk_trace.a 00:03:50.587 SO libspdk_keyring.so.2.0 00:03:50.587 SO libspdk_trace.so.11.0 00:03:50.587 SYMLINK libspdk_keyring.so 00:03:50.846 SYMLINK libspdk_trace.so 00:03:51.105 CC lib/sock/sock.o 00:03:51.105 CC lib/sock/sock_rpc.o 00:03:51.105 CC lib/thread/thread.o 00:03:51.105 CC lib/thread/iobuf.o 00:03:51.673 LIB libspdk_sock.a 00:03:51.673 SO libspdk_sock.so.10.0 00:03:51.673 SYMLINK libspdk_sock.so 00:03:51.932 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:51.932 CC lib/nvme/nvme_ctrlr.o 00:03:51.932 CC lib/nvme/nvme_ns_cmd.o 00:03:51.932 CC lib/nvme/nvme_fabric.o 00:03:51.932 CC lib/nvme/nvme_ns.o 00:03:51.932 CC lib/nvme/nvme_pcie_common.o 00:03:51.932 CC lib/nvme/nvme_pcie.o 00:03:51.932 CC lib/nvme/nvme_qpair.o 00:03:51.932 CC lib/nvme/nvme.o 00:03:52.906 CC lib/nvme/nvme_quirks.o 00:03:52.906 CC lib/nvme/nvme_transport.o 00:03:52.906 CC lib/nvme/nvme_discovery.o 00:03:52.906 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:52.906 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:53.179 LIB libspdk_thread.a 00:03:53.180 SO libspdk_thread.so.11.0 00:03:53.180 CC lib/nvme/nvme_tcp.o 00:03:53.180 CC lib/nvme/nvme_opal.o 00:03:53.180 SYMLINK libspdk_thread.so 00:03:53.180 CC lib/nvme/nvme_io_msg.o 00:03:53.449 CC lib/nvme/nvme_poll_group.o 00:03:53.449 CC lib/nvme/nvme_zns.o 00:03:53.449 CC lib/nvme/nvme_stubs.o 00:03:53.708 CC lib/nvme/nvme_auth.o 00:03:53.708 CC lib/nvme/nvme_cuse.o 00:03:53.709 CC lib/nvme/nvme_vfio_user.o 00:03:53.709 CC lib/accel/accel.o 00:03:53.967 CC lib/accel/accel_rpc.o 00:03:53.967 CC lib/nvme/nvme_rdma.o 00:03:53.967 CC lib/accel/accel_sw.o 00:03:54.226 CC lib/blob/blobstore.o 00:03:54.226 CC lib/init/json_config.o 00:03:54.485 CC lib/init/subsystem.o 00:03:54.485 CC lib/init/subsystem_rpc.o 00:03:54.485 CC lib/init/rpc.o 00:03:54.744 CC lib/blob/request.o 00:03:54.744 CC lib/blob/zeroes.o 00:03:54.744 CC lib/virtio/virtio.o 00:03:54.744 LIB libspdk_init.a 00:03:54.744 CC lib/blob/blob_bs_dev.o 00:03:54.744 SO libspdk_init.so.6.0 00:03:54.744 CC lib/virtio/virtio_vhost_user.o 00:03:55.003 CC lib/virtio/virtio_vfio_user.o 00:03:55.003 SYMLINK libspdk_init.so 00:03:55.003 CC lib/vfu_tgt/tgt_endpoint.o 00:03:55.003 CC lib/virtio/virtio_pci.o 00:03:55.262 CC lib/vfu_tgt/tgt_rpc.o 00:03:55.262 CC lib/fsdev/fsdev.o 00:03:55.262 CC lib/fsdev/fsdev_io.o 00:03:55.262 CC lib/event/app.o 00:03:55.262 LIB libspdk_accel.a 00:03:55.262 CC lib/event/reactor.o 00:03:55.262 SO libspdk_accel.so.16.0 00:03:55.262 CC lib/event/log_rpc.o 00:03:55.521 SYMLINK libspdk_accel.so 00:03:55.521 CC lib/fsdev/fsdev_rpc.o 00:03:55.521 LIB libspdk_vfu_tgt.a 00:03:55.521 LIB libspdk_virtio.a 00:03:55.521 SO libspdk_vfu_tgt.so.3.0 00:03:55.521 SO libspdk_virtio.so.7.0 00:03:55.521 SYMLINK libspdk_vfu_tgt.so 00:03:55.521 CC lib/event/app_rpc.o 00:03:55.521 CC lib/event/scheduler_static.o 00:03:55.521 SYMLINK libspdk_virtio.so 00:03:55.780 LIB libspdk_nvme.a 00:03:55.780 CC lib/bdev/bdev.o 
00:03:55.780 CC lib/bdev/bdev_zone.o 00:03:55.780 CC lib/bdev/bdev_rpc.o 00:03:55.780 CC lib/bdev/part.o 00:03:55.780 CC lib/bdev/scsi_nvme.o 00:03:56.039 LIB libspdk_event.a 00:03:56.039 LIB libspdk_fsdev.a 00:03:56.039 SO libspdk_event.so.14.0 00:03:56.039 SO libspdk_nvme.so.15.0 00:03:56.039 SO libspdk_fsdev.so.2.0 00:03:56.039 SYMLINK libspdk_event.so 00:03:56.040 SYMLINK libspdk_fsdev.so 00:03:56.299 SYMLINK libspdk_nvme.so 00:03:56.299 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:57.235 LIB libspdk_fuse_dispatcher.a 00:03:57.235 SO libspdk_fuse_dispatcher.so.1.0 00:03:57.235 SYMLINK libspdk_fuse_dispatcher.so 00:03:58.613 LIB libspdk_blob.a 00:03:58.613 SO libspdk_blob.so.11.0 00:03:58.613 SYMLINK libspdk_blob.so 00:03:58.870 CC lib/lvol/lvol.o 00:03:58.870 CC lib/blobfs/blobfs.o 00:03:58.870 CC lib/blobfs/tree.o 00:03:59.128 LIB libspdk_bdev.a 00:03:59.128 SO libspdk_bdev.so.17.0 00:03:59.128 SYMLINK libspdk_bdev.so 00:03:59.385 CC lib/nvmf/ctrlr.o 00:03:59.385 CC lib/nvmf/ctrlr_discovery.o 00:03:59.386 CC lib/nvmf/ctrlr_bdev.o 00:03:59.386 CC lib/nvmf/subsystem.o 00:03:59.386 CC lib/ublk/ublk.o 00:03:59.386 CC lib/scsi/dev.o 00:03:59.386 CC lib/ftl/ftl_core.o 00:03:59.386 CC lib/nbd/nbd.o 00:03:59.644 CC lib/scsi/lun.o 00:03:59.901 CC lib/nbd/nbd_rpc.o 00:03:59.901 CC lib/ftl/ftl_init.o 00:03:59.901 LIB libspdk_blobfs.a 00:04:00.159 SO libspdk_blobfs.so.10.0 00:04:00.159 CC lib/ftl/ftl_layout.o 00:04:00.159 LIB libspdk_lvol.a 00:04:00.159 SO libspdk_lvol.so.10.0 00:04:00.159 SYMLINK libspdk_blobfs.so 00:04:00.159 CC lib/scsi/port.o 00:04:00.159 CC lib/scsi/scsi.o 00:04:00.159 LIB libspdk_nbd.a 00:04:00.159 SYMLINK libspdk_lvol.so 00:04:00.159 CC lib/ublk/ublk_rpc.o 00:04:00.159 CC lib/nvmf/nvmf.o 00:04:00.159 SO libspdk_nbd.so.7.0 00:04:00.159 SYMLINK libspdk_nbd.so 00:04:00.159 CC lib/nvmf/nvmf_rpc.o 00:04:00.159 CC lib/nvmf/transport.o 00:04:00.417 CC lib/nvmf/tcp.o 00:04:00.417 CC lib/scsi/scsi_bdev.o 00:04:00.417 CC lib/nvmf/stubs.o 00:04:00.417 LIB libspdk_ublk.a 00:04:00.417 SO libspdk_ublk.so.3.0 00:04:00.417 CC lib/ftl/ftl_debug.o 00:04:00.417 SYMLINK libspdk_ublk.so 00:04:00.417 CC lib/ftl/ftl_io.o 00:04:00.676 CC lib/nvmf/mdns_server.o 00:04:00.676 CC lib/ftl/ftl_sb.o 00:04:00.937 CC lib/nvmf/vfio_user.o 00:04:00.937 CC lib/scsi/scsi_pr.o 00:04:00.937 CC lib/ftl/ftl_l2p.o 00:04:01.201 CC lib/nvmf/rdma.o 00:04:01.201 CC lib/nvmf/auth.o 00:04:01.201 CC lib/ftl/ftl_l2p_flat.o 00:04:01.201 CC lib/scsi/scsi_rpc.o 00:04:01.201 CC lib/scsi/task.o 00:04:01.460 CC lib/ftl/ftl_nv_cache.o 00:04:01.460 CC lib/ftl/ftl_band.o 00:04:01.460 CC lib/ftl/ftl_band_ops.o 00:04:01.460 CC lib/ftl/ftl_writer.o 00:04:01.460 LIB libspdk_scsi.a 00:04:01.719 SO libspdk_scsi.so.9.0 00:04:01.719 SYMLINK libspdk_scsi.so 00:04:01.719 CC lib/ftl/ftl_rq.o 00:04:01.719 CC lib/ftl/ftl_reloc.o 00:04:01.719 CC lib/ftl/ftl_l2p_cache.o 00:04:01.719 CC lib/ftl/ftl_p2l.o 00:04:01.977 CC lib/ftl/ftl_p2l_log.o 00:04:01.977 CC lib/ftl/mngt/ftl_mngt.o 00:04:02.236 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:02.236 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:02.236 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:02.498 CC lib/iscsi/conn.o 00:04:02.498 CC lib/iscsi/init_grp.o 00:04:02.498 CC lib/iscsi/iscsi.o 00:04:02.498 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:02.498 CC lib/vhost/vhost.o 00:04:02.498 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:02.498 CC lib/iscsi/param.o 00:04:02.498 CC lib/iscsi/portal_grp.o 00:04:02.758 CC lib/iscsi/tgt_node.o 00:04:02.758 CC lib/iscsi/iscsi_subsystem.o 00:04:02.758 CC lib/ftl/mngt/ftl_mngt_ioch.o 
00:04:03.017 CC lib/iscsi/iscsi_rpc.o 00:04:03.017 CC lib/iscsi/task.o 00:04:03.017 CC lib/vhost/vhost_rpc.o 00:04:03.276 CC lib/vhost/vhost_scsi.o 00:04:03.276 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:03.276 CC lib/vhost/vhost_blk.o 00:04:03.276 CC lib/vhost/rte_vhost_user.o 00:04:03.276 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:03.276 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:03.535 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:03.535 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:03.535 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:03.794 CC lib/ftl/utils/ftl_conf.o 00:04:03.794 CC lib/ftl/utils/ftl_md.o 00:04:03.794 CC lib/ftl/utils/ftl_mempool.o 00:04:03.794 CC lib/ftl/utils/ftl_bitmap.o 00:04:03.794 CC lib/ftl/utils/ftl_property.o 00:04:04.053 LIB libspdk_nvmf.a 00:04:04.053 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:04.053 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:04.053 SO libspdk_nvmf.so.20.0 00:04:04.053 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:04.311 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:04.311 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:04.311 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:04.311 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:04.311 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:04.311 LIB libspdk_iscsi.a 00:04:04.311 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:04.311 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:04.311 SYMLINK libspdk_nvmf.so 00:04:04.311 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:04.311 SO libspdk_iscsi.so.8.0 00:04:04.570 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:04.570 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:04.570 CC lib/ftl/base/ftl_base_dev.o 00:04:04.570 CC lib/ftl/base/ftl_base_bdev.o 00:04:04.570 LIB libspdk_vhost.a 00:04:04.570 CC lib/ftl/ftl_trace.o 00:04:04.570 SO libspdk_vhost.so.8.0 00:04:04.570 SYMLINK libspdk_iscsi.so 00:04:04.829 SYMLINK libspdk_vhost.so 00:04:04.829 LIB libspdk_ftl.a 00:04:05.088 SO libspdk_ftl.so.9.0 00:04:05.347 SYMLINK libspdk_ftl.so 00:04:05.607 CC module/vfu_device/vfu_virtio.o 00:04:05.607 CC module/env_dpdk/env_dpdk_rpc.o 00:04:05.866 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:05.866 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:05.866 CC module/fsdev/aio/fsdev_aio.o 00:04:05.866 CC module/blob/bdev/blob_bdev.o 00:04:05.866 CC module/sock/posix/posix.o 00:04:05.866 CC module/accel/error/accel_error.o 00:04:05.866 CC module/scheduler/gscheduler/gscheduler.o 00:04:05.866 CC module/keyring/file/keyring.o 00:04:05.866 LIB libspdk_env_dpdk_rpc.a 00:04:05.866 SO libspdk_env_dpdk_rpc.so.6.0 00:04:05.866 SYMLINK libspdk_env_dpdk_rpc.so 00:04:05.866 CC module/keyring/file/keyring_rpc.o 00:04:05.866 LIB libspdk_scheduler_dpdk_governor.a 00:04:05.866 LIB libspdk_scheduler_gscheduler.a 00:04:05.866 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:05.866 SO libspdk_scheduler_gscheduler.so.4.0 00:04:06.125 LIB libspdk_scheduler_dynamic.a 00:04:06.125 CC module/accel/error/accel_error_rpc.o 00:04:06.125 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:06.125 SO libspdk_scheduler_dynamic.so.4.0 00:04:06.125 SYMLINK libspdk_scheduler_gscheduler.so 00:04:06.125 LIB libspdk_keyring_file.a 00:04:06.125 SO libspdk_keyring_file.so.2.0 00:04:06.125 SYMLINK libspdk_scheduler_dynamic.so 00:04:06.125 LIB libspdk_blob_bdev.a 00:04:06.125 CC module/sock/uring/uring.o 00:04:06.125 SO libspdk_blob_bdev.so.11.0 00:04:06.125 SYMLINK libspdk_keyring_file.so 00:04:06.125 CC module/vfu_device/vfu_virtio_blk.o 00:04:06.125 LIB libspdk_accel_error.a 00:04:06.125 SYMLINK libspdk_blob_bdev.so 00:04:06.125 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:06.125 CC 
module/keyring/linux/keyring.o 00:04:06.125 SO libspdk_accel_error.so.2.0 00:04:06.125 CC module/accel/ioat/accel_ioat.o 00:04:06.384 CC module/accel/dsa/accel_dsa.o 00:04:06.384 SYMLINK libspdk_accel_error.so 00:04:06.384 CC module/accel/dsa/accel_dsa_rpc.o 00:04:06.384 CC module/vfu_device/vfu_virtio_scsi.o 00:04:06.384 CC module/keyring/linux/keyring_rpc.o 00:04:06.384 CC module/accel/ioat/accel_ioat_rpc.o 00:04:06.643 CC module/fsdev/aio/linux_aio_mgr.o 00:04:06.643 LIB libspdk_keyring_linux.a 00:04:06.643 SO libspdk_keyring_linux.so.1.0 00:04:06.643 LIB libspdk_accel_dsa.a 00:04:06.643 LIB libspdk_accel_ioat.a 00:04:06.643 SO libspdk_accel_dsa.so.5.0 00:04:06.643 CC module/vfu_device/vfu_virtio_rpc.o 00:04:06.643 SO libspdk_accel_ioat.so.6.0 00:04:06.643 SYMLINK libspdk_keyring_linux.so 00:04:06.643 CC module/vfu_device/vfu_virtio_fs.o 00:04:06.643 LIB libspdk_sock_posix.a 00:04:06.643 SYMLINK libspdk_accel_dsa.so 00:04:06.643 SYMLINK libspdk_accel_ioat.so 00:04:06.643 LIB libspdk_fsdev_aio.a 00:04:06.902 CC module/blobfs/bdev/blobfs_bdev.o 00:04:06.902 SO libspdk_sock_posix.so.6.0 00:04:06.902 CC module/bdev/delay/vbdev_delay.o 00:04:06.902 SO libspdk_fsdev_aio.so.1.0 00:04:06.902 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:06.902 SYMLINK libspdk_sock_posix.so 00:04:06.902 SYMLINK libspdk_fsdev_aio.so 00:04:06.902 CC module/bdev/error/vbdev_error.o 00:04:06.902 CC module/accel/iaa/accel_iaa.o 00:04:06.902 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:06.902 LIB libspdk_vfu_device.a 00:04:06.902 CC module/accel/iaa/accel_iaa_rpc.o 00:04:07.161 SO libspdk_vfu_device.so.3.0 00:04:07.161 CC module/bdev/gpt/gpt.o 00:04:07.161 CC module/bdev/lvol/vbdev_lvol.o 00:04:07.161 CC module/bdev/malloc/bdev_malloc.o 00:04:07.161 LIB libspdk_sock_uring.a 00:04:07.161 SO libspdk_sock_uring.so.5.0 00:04:07.161 SYMLINK libspdk_vfu_device.so 00:04:07.161 CC module/bdev/gpt/vbdev_gpt.o 00:04:07.161 LIB libspdk_blobfs_bdev.a 00:04:07.161 LIB libspdk_accel_iaa.a 00:04:07.161 SYMLINK libspdk_sock_uring.so 00:04:07.161 SO libspdk_blobfs_bdev.so.6.0 00:04:07.161 SO libspdk_accel_iaa.so.3.0 00:04:07.161 CC module/bdev/error/vbdev_error_rpc.o 00:04:07.161 LIB libspdk_bdev_delay.a 00:04:07.161 SYMLINK libspdk_accel_iaa.so 00:04:07.161 SO libspdk_bdev_delay.so.6.0 00:04:07.161 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:07.161 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:07.161 SYMLINK libspdk_blobfs_bdev.so 00:04:07.420 CC module/bdev/null/bdev_null.o 00:04:07.420 SYMLINK libspdk_bdev_delay.so 00:04:07.420 CC module/bdev/null/bdev_null_rpc.o 00:04:07.420 LIB libspdk_bdev_error.a 00:04:07.420 SO libspdk_bdev_error.so.6.0 00:04:07.420 CC module/bdev/nvme/bdev_nvme.o 00:04:07.420 CC module/bdev/passthru/vbdev_passthru.o 00:04:07.420 LIB libspdk_bdev_gpt.a 00:04:07.420 SO libspdk_bdev_gpt.so.6.0 00:04:07.420 SYMLINK libspdk_bdev_error.so 00:04:07.420 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:07.420 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:07.678 LIB libspdk_bdev_malloc.a 00:04:07.678 SYMLINK libspdk_bdev_gpt.so 00:04:07.678 SO libspdk_bdev_malloc.so.6.0 00:04:07.678 CC module/bdev/raid/bdev_raid.o 00:04:07.678 LIB libspdk_bdev_null.a 00:04:07.678 SYMLINK libspdk_bdev_malloc.so 00:04:07.678 SO libspdk_bdev_null.so.6.0 00:04:07.678 CC module/bdev/nvme/nvme_rpc.o 00:04:07.678 CC module/bdev/nvme/bdev_mdns_client.o 00:04:07.678 LIB libspdk_bdev_lvol.a 00:04:07.678 CC module/bdev/raid/bdev_raid_rpc.o 00:04:07.678 CC module/bdev/split/vbdev_split.o 00:04:07.678 SO libspdk_bdev_lvol.so.6.0 00:04:07.678 SYMLINK 
libspdk_bdev_null.so 00:04:07.937 SYMLINK libspdk_bdev_lvol.so 00:04:07.937 LIB libspdk_bdev_passthru.a 00:04:07.937 CC module/bdev/nvme/vbdev_opal.o 00:04:07.937 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:07.937 SO libspdk_bdev_passthru.so.6.0 00:04:07.937 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:07.937 SYMLINK libspdk_bdev_passthru.so 00:04:07.937 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:07.937 CC module/bdev/raid/bdev_raid_sb.o 00:04:07.937 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:07.937 CC module/bdev/split/vbdev_split_rpc.o 00:04:08.196 CC module/bdev/raid/raid0.o 00:04:08.196 LIB libspdk_bdev_split.a 00:04:08.196 SO libspdk_bdev_split.so.6.0 00:04:08.196 CC module/bdev/raid/raid1.o 00:04:08.455 CC module/bdev/raid/concat.o 00:04:08.455 CC module/bdev/uring/bdev_uring.o 00:04:08.455 CC module/bdev/aio/bdev_aio.o 00:04:08.455 CC module/bdev/ftl/bdev_ftl.o 00:04:08.455 SYMLINK libspdk_bdev_split.so 00:04:08.455 LIB libspdk_bdev_zone_block.a 00:04:08.455 SO libspdk_bdev_zone_block.so.6.0 00:04:08.455 SYMLINK libspdk_bdev_zone_block.so 00:04:08.455 CC module/bdev/uring/bdev_uring_rpc.o 00:04:08.455 CC module/bdev/iscsi/bdev_iscsi.o 00:04:08.713 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:08.713 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:08.713 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:08.713 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:08.713 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:08.713 CC module/bdev/aio/bdev_aio_rpc.o 00:04:08.713 LIB libspdk_bdev_uring.a 00:04:08.713 SO libspdk_bdev_uring.so.6.0 00:04:08.972 SYMLINK libspdk_bdev_uring.so 00:04:08.972 LIB libspdk_bdev_ftl.a 00:04:08.972 SO libspdk_bdev_ftl.so.6.0 00:04:08.972 LIB libspdk_bdev_aio.a 00:04:08.972 LIB libspdk_bdev_raid.a 00:04:08.972 SO libspdk_bdev_aio.so.6.0 00:04:08.972 LIB libspdk_bdev_iscsi.a 00:04:08.972 SYMLINK libspdk_bdev_ftl.so 00:04:08.972 SO libspdk_bdev_iscsi.so.6.0 00:04:08.972 SO libspdk_bdev_raid.so.6.0 00:04:08.972 SYMLINK libspdk_bdev_aio.so 00:04:09.231 SYMLINK libspdk_bdev_iscsi.so 00:04:09.231 SYMLINK libspdk_bdev_raid.so 00:04:09.231 LIB libspdk_bdev_virtio.a 00:04:09.231 SO libspdk_bdev_virtio.so.6.0 00:04:09.490 SYMLINK libspdk_bdev_virtio.so 00:04:10.428 LIB libspdk_bdev_nvme.a 00:04:10.688 SO libspdk_bdev_nvme.so.7.1 00:04:10.688 SYMLINK libspdk_bdev_nvme.so 00:04:11.254 CC module/event/subsystems/scheduler/scheduler.o 00:04:11.254 CC module/event/subsystems/vmd/vmd.o 00:04:11.254 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:11.254 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:04:11.254 CC module/event/subsystems/iobuf/iobuf.o 00:04:11.254 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:11.254 CC module/event/subsystems/keyring/keyring.o 00:04:11.254 CC module/event/subsystems/sock/sock.o 00:04:11.254 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:11.254 CC module/event/subsystems/fsdev/fsdev.o 00:04:11.513 LIB libspdk_event_vfu_tgt.a 00:04:11.513 LIB libspdk_event_sock.a 00:04:11.513 LIB libspdk_event_vhost_blk.a 00:04:11.513 LIB libspdk_event_scheduler.a 00:04:11.513 LIB libspdk_event_keyring.a 00:04:11.513 LIB libspdk_event_vmd.a 00:04:11.513 SO libspdk_event_vfu_tgt.so.3.0 00:04:11.513 SO libspdk_event_scheduler.so.4.0 00:04:11.513 SO libspdk_event_vhost_blk.so.3.0 00:04:11.513 SO libspdk_event_sock.so.5.0 00:04:11.513 LIB libspdk_event_fsdev.a 00:04:11.513 SO libspdk_event_keyring.so.1.0 00:04:11.513 LIB libspdk_event_iobuf.a 00:04:11.513 SO libspdk_event_vmd.so.6.0 00:04:11.513 SO libspdk_event_fsdev.so.1.0 00:04:11.513 SYMLINK 
libspdk_event_vfu_tgt.so 00:04:11.513 SO libspdk_event_iobuf.so.3.0 00:04:11.513 SYMLINK libspdk_event_scheduler.so 00:04:11.513 SYMLINK libspdk_event_sock.so 00:04:11.513 SYMLINK libspdk_event_vhost_blk.so 00:04:11.513 SYMLINK libspdk_event_keyring.so 00:04:11.513 SYMLINK libspdk_event_vmd.so 00:04:11.513 SYMLINK libspdk_event_fsdev.so 00:04:11.513 SYMLINK libspdk_event_iobuf.so 00:04:11.772 CC module/event/subsystems/accel/accel.o 00:04:12.031 LIB libspdk_event_accel.a 00:04:12.031 SO libspdk_event_accel.so.6.0 00:04:12.031 SYMLINK libspdk_event_accel.so 00:04:12.290 CC module/event/subsystems/bdev/bdev.o 00:04:12.549 LIB libspdk_event_bdev.a 00:04:12.549 SO libspdk_event_bdev.so.6.0 00:04:12.549 SYMLINK libspdk_event_bdev.so 00:04:12.808 CC module/event/subsystems/nbd/nbd.o 00:04:12.808 CC module/event/subsystems/ublk/ublk.o 00:04:12.808 CC module/event/subsystems/scsi/scsi.o 00:04:12.808 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:12.808 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:13.067 LIB libspdk_event_nbd.a 00:04:13.067 LIB libspdk_event_ublk.a 00:04:13.067 LIB libspdk_event_scsi.a 00:04:13.067 SO libspdk_event_ublk.so.3.0 00:04:13.067 SO libspdk_event_nbd.so.6.0 00:04:13.067 SO libspdk_event_scsi.so.6.0 00:04:13.067 SYMLINK libspdk_event_ublk.so 00:04:13.067 SYMLINK libspdk_event_nbd.so 00:04:13.067 SYMLINK libspdk_event_scsi.so 00:04:13.067 LIB libspdk_event_nvmf.a 00:04:13.067 SO libspdk_event_nvmf.so.6.0 00:04:13.326 SYMLINK libspdk_event_nvmf.so 00:04:13.326 CC module/event/subsystems/iscsi/iscsi.o 00:04:13.326 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:13.585 LIB libspdk_event_vhost_scsi.a 00:04:13.585 LIB libspdk_event_iscsi.a 00:04:13.585 SO libspdk_event_vhost_scsi.so.3.0 00:04:13.585 SO libspdk_event_iscsi.so.6.0 00:04:13.585 SYMLINK libspdk_event_vhost_scsi.so 00:04:13.585 SYMLINK libspdk_event_iscsi.so 00:04:13.845 SO libspdk.so.6.0 00:04:13.845 SYMLINK libspdk.so 00:04:14.104 CC app/spdk_lspci/spdk_lspci.o 00:04:14.104 CC app/trace_record/trace_record.o 00:04:14.104 CXX app/trace/trace.o 00:04:14.104 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:14.104 CC app/nvmf_tgt/nvmf_main.o 00:04:14.104 CC app/iscsi_tgt/iscsi_tgt.o 00:04:14.104 CC examples/ioat/perf/perf.o 00:04:14.104 CC app/spdk_tgt/spdk_tgt.o 00:04:14.104 CC examples/util/zipf/zipf.o 00:04:14.104 CC test/thread/poller_perf/poller_perf.o 00:04:14.104 LINK spdk_lspci 00:04:14.363 LINK nvmf_tgt 00:04:14.363 LINK interrupt_tgt 00:04:14.363 LINK poller_perf 00:04:14.363 LINK zipf 00:04:14.363 LINK spdk_trace_record 00:04:14.363 LINK iscsi_tgt 00:04:14.363 LINK ioat_perf 00:04:14.363 LINK spdk_tgt 00:04:14.363 CC app/spdk_nvme_perf/perf.o 00:04:14.622 LINK spdk_trace 00:04:14.622 CC app/spdk_nvme_identify/identify.o 00:04:14.622 CC app/spdk_top/spdk_top.o 00:04:14.622 CC app/spdk_nvme_discover/discovery_aer.o 00:04:14.622 CC examples/ioat/verify/verify.o 00:04:14.622 CC app/spdk_dd/spdk_dd.o 00:04:14.881 CC test/dma/test_dma/test_dma.o 00:04:14.881 TEST_HEADER include/spdk/accel.h 00:04:14.881 TEST_HEADER include/spdk/accel_module.h 00:04:14.881 TEST_HEADER include/spdk/assert.h 00:04:14.881 TEST_HEADER include/spdk/barrier.h 00:04:14.881 TEST_HEADER include/spdk/base64.h 00:04:14.881 TEST_HEADER include/spdk/bdev.h 00:04:14.881 TEST_HEADER include/spdk/bdev_module.h 00:04:14.881 TEST_HEADER include/spdk/bdev_zone.h 00:04:14.881 TEST_HEADER include/spdk/bit_array.h 00:04:14.881 TEST_HEADER include/spdk/bit_pool.h 00:04:14.881 TEST_HEADER include/spdk/blob_bdev.h 00:04:14.881 TEST_HEADER 
include/spdk/blobfs_bdev.h 00:04:14.881 TEST_HEADER include/spdk/blobfs.h 00:04:14.881 TEST_HEADER include/spdk/blob.h 00:04:14.881 TEST_HEADER include/spdk/conf.h 00:04:14.881 TEST_HEADER include/spdk/config.h 00:04:14.881 TEST_HEADER include/spdk/cpuset.h 00:04:14.881 TEST_HEADER include/spdk/crc16.h 00:04:14.881 TEST_HEADER include/spdk/crc32.h 00:04:14.881 TEST_HEADER include/spdk/crc64.h 00:04:14.881 TEST_HEADER include/spdk/dif.h 00:04:14.881 TEST_HEADER include/spdk/dma.h 00:04:14.881 TEST_HEADER include/spdk/endian.h 00:04:14.881 TEST_HEADER include/spdk/env_dpdk.h 00:04:14.881 TEST_HEADER include/spdk/env.h 00:04:14.881 TEST_HEADER include/spdk/event.h 00:04:14.881 TEST_HEADER include/spdk/fd_group.h 00:04:14.881 TEST_HEADER include/spdk/fd.h 00:04:14.881 TEST_HEADER include/spdk/file.h 00:04:14.881 TEST_HEADER include/spdk/fsdev.h 00:04:14.881 TEST_HEADER include/spdk/fsdev_module.h 00:04:14.881 CC test/app/bdev_svc/bdev_svc.o 00:04:14.881 TEST_HEADER include/spdk/ftl.h 00:04:14.881 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:14.881 TEST_HEADER include/spdk/gpt_spec.h 00:04:14.881 TEST_HEADER include/spdk/hexlify.h 00:04:14.881 TEST_HEADER include/spdk/histogram_data.h 00:04:14.881 TEST_HEADER include/spdk/idxd.h 00:04:14.881 TEST_HEADER include/spdk/idxd_spec.h 00:04:14.881 TEST_HEADER include/spdk/init.h 00:04:14.881 LINK spdk_nvme_discover 00:04:14.881 TEST_HEADER include/spdk/ioat.h 00:04:14.881 TEST_HEADER include/spdk/ioat_spec.h 00:04:14.881 TEST_HEADER include/spdk/iscsi_spec.h 00:04:14.881 TEST_HEADER include/spdk/json.h 00:04:14.881 TEST_HEADER include/spdk/jsonrpc.h 00:04:14.881 TEST_HEADER include/spdk/keyring.h 00:04:14.881 TEST_HEADER include/spdk/keyring_module.h 00:04:14.881 TEST_HEADER include/spdk/likely.h 00:04:14.881 TEST_HEADER include/spdk/log.h 00:04:14.881 TEST_HEADER include/spdk/lvol.h 00:04:14.881 TEST_HEADER include/spdk/md5.h 00:04:14.881 TEST_HEADER include/spdk/memory.h 00:04:14.881 TEST_HEADER include/spdk/mmio.h 00:04:14.881 TEST_HEADER include/spdk/nbd.h 00:04:14.881 TEST_HEADER include/spdk/net.h 00:04:14.881 TEST_HEADER include/spdk/notify.h 00:04:14.881 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:14.881 TEST_HEADER include/spdk/nvme.h 00:04:14.881 TEST_HEADER include/spdk/nvme_intel.h 00:04:14.881 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:14.881 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:14.881 TEST_HEADER include/spdk/nvme_spec.h 00:04:14.881 TEST_HEADER include/spdk/nvme_zns.h 00:04:14.881 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:14.881 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:14.881 TEST_HEADER include/spdk/nvmf.h 00:04:14.881 TEST_HEADER include/spdk/nvmf_spec.h 00:04:14.881 TEST_HEADER include/spdk/nvmf_transport.h 00:04:14.881 TEST_HEADER include/spdk/opal.h 00:04:14.881 TEST_HEADER include/spdk/opal_spec.h 00:04:14.881 TEST_HEADER include/spdk/pci_ids.h 00:04:14.881 TEST_HEADER include/spdk/pipe.h 00:04:14.881 TEST_HEADER include/spdk/queue.h 00:04:14.881 TEST_HEADER include/spdk/reduce.h 00:04:14.881 LINK verify 00:04:14.881 TEST_HEADER include/spdk/rpc.h 00:04:14.881 TEST_HEADER include/spdk/scheduler.h 00:04:14.881 TEST_HEADER include/spdk/scsi.h 00:04:14.881 TEST_HEADER include/spdk/scsi_spec.h 00:04:14.881 TEST_HEADER include/spdk/sock.h 00:04:14.881 TEST_HEADER include/spdk/stdinc.h 00:04:14.881 TEST_HEADER include/spdk/string.h 00:04:14.881 TEST_HEADER include/spdk/thread.h 00:04:14.881 TEST_HEADER include/spdk/trace.h 00:04:14.881 TEST_HEADER include/spdk/trace_parser.h 00:04:14.881 TEST_HEADER 
include/spdk/tree.h 00:04:14.881 TEST_HEADER include/spdk/ublk.h 00:04:14.881 TEST_HEADER include/spdk/util.h 00:04:14.881 TEST_HEADER include/spdk/uuid.h 00:04:14.881 TEST_HEADER include/spdk/version.h 00:04:14.881 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:14.881 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:14.881 TEST_HEADER include/spdk/vhost.h 00:04:14.881 TEST_HEADER include/spdk/vmd.h 00:04:15.141 TEST_HEADER include/spdk/xor.h 00:04:15.141 TEST_HEADER include/spdk/zipf.h 00:04:15.141 CXX test/cpp_headers/accel.o 00:04:15.141 LINK bdev_svc 00:04:15.141 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:15.141 CXX test/cpp_headers/accel_module.o 00:04:15.400 CXX test/cpp_headers/assert.o 00:04:15.400 LINK spdk_dd 00:04:15.400 CC examples/thread/thread/thread_ex.o 00:04:15.400 LINK test_dma 00:04:15.400 CXX test/cpp_headers/barrier.o 00:04:15.400 LINK nvme_fuzz 00:04:15.400 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:15.400 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:15.658 CXX test/cpp_headers/base64.o 00:04:15.658 LINK thread 00:04:15.658 LINK spdk_nvme_identify 00:04:15.658 LINK spdk_nvme_perf 00:04:15.917 CC app/fio/nvme/fio_plugin.o 00:04:15.917 CXX test/cpp_headers/bdev.o 00:04:15.917 CC app/vhost/vhost.o 00:04:15.917 LINK spdk_top 00:04:15.917 CC examples/sock/hello_world/hello_sock.o 00:04:15.917 CXX test/cpp_headers/bdev_module.o 00:04:16.176 LINK vhost 00:04:16.176 CC test/event/event_perf/event_perf.o 00:04:16.176 CC examples/vmd/lsvmd/lsvmd.o 00:04:16.176 LINK vhost_fuzz 00:04:16.176 CC examples/vmd/led/led.o 00:04:16.176 CC test/env/mem_callbacks/mem_callbacks.o 00:04:16.176 LINK hello_sock 00:04:16.176 LINK lsvmd 00:04:16.176 CXX test/cpp_headers/bdev_zone.o 00:04:16.176 LINK event_perf 00:04:16.176 LINK led 00:04:16.435 CC test/rpc_client/rpc_client_test.o 00:04:16.435 CXX test/cpp_headers/bit_array.o 00:04:16.435 CC test/nvme/aer/aer.o 00:04:16.435 CC test/app/histogram_perf/histogram_perf.o 00:04:16.435 CC test/event/reactor/reactor.o 00:04:16.435 LINK spdk_nvme 00:04:16.693 LINK rpc_client_test 00:04:16.693 CC test/accel/dif/dif.o 00:04:16.693 CC examples/idxd/perf/perf.o 00:04:16.693 CXX test/cpp_headers/bit_pool.o 00:04:16.693 LINK reactor 00:04:16.693 LINK histogram_perf 00:04:16.693 CC app/fio/bdev/fio_plugin.o 00:04:16.951 CXX test/cpp_headers/blob_bdev.o 00:04:16.951 LINK aer 00:04:16.951 LINK mem_callbacks 00:04:16.951 CC test/event/reactor_perf/reactor_perf.o 00:04:16.951 CC test/event/app_repeat/app_repeat.o 00:04:16.951 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:16.951 CXX test/cpp_headers/blobfs_bdev.o 00:04:16.951 LINK idxd_perf 00:04:16.951 LINK reactor_perf 00:04:16.951 CC test/env/vtophys/vtophys.o 00:04:17.215 CC test/nvme/reset/reset.o 00:04:17.215 LINK app_repeat 00:04:17.215 CXX test/cpp_headers/blobfs.o 00:04:17.215 LINK vtophys 00:04:17.215 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:17.215 LINK hello_fsdev 00:04:17.215 CC test/env/memory/memory_ut.o 00:04:17.493 LINK iscsi_fuzz 00:04:17.493 LINK spdk_bdev 00:04:17.493 CXX test/cpp_headers/blob.o 00:04:17.493 LINK reset 00:04:17.493 CC test/event/scheduler/scheduler.o 00:04:17.493 LINK env_dpdk_post_init 00:04:17.493 CXX test/cpp_headers/conf.o 00:04:17.493 LINK dif 00:04:17.493 CXX test/cpp_headers/config.o 00:04:17.773 CC examples/accel/perf/accel_perf.o 00:04:17.773 CC test/env/pci/pci_ut.o 00:04:17.773 CC test/app/jsoncat/jsoncat.o 00:04:17.773 CC test/nvme/sgl/sgl.o 00:04:17.773 CXX test/cpp_headers/cpuset.o 00:04:17.773 LINK scheduler 00:04:17.773 CXX 
test/cpp_headers/crc16.o 00:04:17.773 CC test/app/stub/stub.o 00:04:17.773 CC examples/blob/hello_world/hello_blob.o 00:04:17.773 LINK jsoncat 00:04:18.040 CXX test/cpp_headers/crc32.o 00:04:18.040 LINK stub 00:04:18.040 LINK sgl 00:04:18.040 CC examples/blob/cli/blobcli.o 00:04:18.040 LINK hello_blob 00:04:18.040 CXX test/cpp_headers/crc64.o 00:04:18.040 CXX test/cpp_headers/dif.o 00:04:18.040 CC test/blobfs/mkfs/mkfs.o 00:04:18.040 LINK pci_ut 00:04:18.298 CC examples/nvme/hello_world/hello_world.o 00:04:18.298 LINK accel_perf 00:04:18.298 CC test/nvme/e2edp/nvme_dp.o 00:04:18.298 CXX test/cpp_headers/dma.o 00:04:18.298 CXX test/cpp_headers/endian.o 00:04:18.298 CC test/nvme/overhead/overhead.o 00:04:18.298 LINK mkfs 00:04:18.555 LINK hello_world 00:04:18.555 CXX test/cpp_headers/env_dpdk.o 00:04:18.555 CC test/nvme/err_injection/err_injection.o 00:04:18.555 LINK nvme_dp 00:04:18.555 LINK blobcli 00:04:18.813 CC test/lvol/esnap/esnap.o 00:04:18.813 CC examples/bdev/hello_world/hello_bdev.o 00:04:18.813 CXX test/cpp_headers/env.o 00:04:18.813 LINK overhead 00:04:18.813 LINK memory_ut 00:04:18.813 CC examples/nvme/reconnect/reconnect.o 00:04:18.813 LINK err_injection 00:04:18.813 CC test/bdev/bdevio/bdevio.o 00:04:18.813 CXX test/cpp_headers/event.o 00:04:19.071 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:19.071 CC examples/bdev/bdevperf/bdevperf.o 00:04:19.071 LINK hello_bdev 00:04:19.071 CC test/nvme/startup/startup.o 00:04:19.071 CC examples/nvme/arbitration/arbitration.o 00:04:19.071 CC test/nvme/reserve/reserve.o 00:04:19.071 CXX test/cpp_headers/fd_group.o 00:04:19.071 LINK reconnect 00:04:19.329 LINK startup 00:04:19.329 CXX test/cpp_headers/fd.o 00:04:19.329 CC test/nvme/simple_copy/simple_copy.o 00:04:19.329 LINK bdevio 00:04:19.329 LINK reserve 00:04:19.329 CC test/nvme/connect_stress/connect_stress.o 00:04:19.329 LINK arbitration 00:04:19.329 CC test/nvme/boot_partition/boot_partition.o 00:04:19.587 CXX test/cpp_headers/file.o 00:04:19.587 CC test/nvme/compliance/nvme_compliance.o 00:04:19.587 LINK simple_copy 00:04:19.587 LINK connect_stress 00:04:19.587 LINK nvme_manage 00:04:19.587 CXX test/cpp_headers/fsdev.o 00:04:19.587 LINK boot_partition 00:04:19.587 CC test/nvme/fused_ordering/fused_ordering.o 00:04:19.587 CC examples/nvme/hotplug/hotplug.o 00:04:19.845 CXX test/cpp_headers/fsdev_module.o 00:04:19.845 CXX test/cpp_headers/ftl.o 00:04:19.845 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:19.845 CC test/nvme/cuse/cuse.o 00:04:19.845 CC test/nvme/fdp/fdp.o 00:04:19.845 LINK fused_ordering 00:04:20.104 LINK hotplug 00:04:20.104 LINK nvme_compliance 00:04:20.104 LINK bdevperf 00:04:20.104 CXX test/cpp_headers/fuse_dispatcher.o 00:04:20.104 LINK doorbell_aers 00:04:20.104 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:20.104 CXX test/cpp_headers/gpt_spec.o 00:04:20.104 CC examples/nvme/abort/abort.o 00:04:20.104 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:20.363 CXX test/cpp_headers/hexlify.o 00:04:20.363 CXX test/cpp_headers/histogram_data.o 00:04:20.363 CXX test/cpp_headers/idxd.o 00:04:20.363 LINK fdp 00:04:20.363 CXX test/cpp_headers/idxd_spec.o 00:04:20.363 LINK cmb_copy 00:04:20.363 CXX test/cpp_headers/init.o 00:04:20.363 LINK pmr_persistence 00:04:20.363 CXX test/cpp_headers/ioat.o 00:04:20.363 CXX test/cpp_headers/ioat_spec.o 00:04:20.621 CXX test/cpp_headers/iscsi_spec.o 00:04:20.621 CXX test/cpp_headers/json.o 00:04:20.621 CXX test/cpp_headers/jsonrpc.o 00:04:20.621 CXX test/cpp_headers/keyring.o 00:04:20.621 CXX test/cpp_headers/keyring_module.o 
00:04:20.621 LINK abort 00:04:20.621 CXX test/cpp_headers/likely.o 00:04:20.621 CXX test/cpp_headers/log.o 00:04:20.621 CXX test/cpp_headers/lvol.o 00:04:20.621 CXX test/cpp_headers/md5.o 00:04:20.621 CXX test/cpp_headers/memory.o 00:04:20.880 CXX test/cpp_headers/mmio.o 00:04:20.880 CXX test/cpp_headers/nbd.o 00:04:20.880 CXX test/cpp_headers/net.o 00:04:20.880 CXX test/cpp_headers/notify.o 00:04:20.880 CXX test/cpp_headers/nvme.o 00:04:20.880 CXX test/cpp_headers/nvme_intel.o 00:04:20.880 CXX test/cpp_headers/nvme_ocssd.o 00:04:20.880 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:20.880 CXX test/cpp_headers/nvme_spec.o 00:04:21.138 CXX test/cpp_headers/nvme_zns.o 00:04:21.138 CXX test/cpp_headers/nvmf_cmd.o 00:04:21.138 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:21.138 CC examples/nvmf/nvmf/nvmf.o 00:04:21.138 CXX test/cpp_headers/nvmf.o 00:04:21.138 CXX test/cpp_headers/nvmf_spec.o 00:04:21.138 CXX test/cpp_headers/nvmf_transport.o 00:04:21.138 CXX test/cpp_headers/opal.o 00:04:21.138 CXX test/cpp_headers/opal_spec.o 00:04:21.138 CXX test/cpp_headers/pci_ids.o 00:04:21.138 CXX test/cpp_headers/pipe.o 00:04:21.138 CXX test/cpp_headers/queue.o 00:04:21.396 CXX test/cpp_headers/reduce.o 00:04:21.396 CXX test/cpp_headers/rpc.o 00:04:21.396 CXX test/cpp_headers/scheduler.o 00:04:21.396 CXX test/cpp_headers/scsi.o 00:04:21.396 LINK nvmf 00:04:21.396 CXX test/cpp_headers/scsi_spec.o 00:04:21.396 CXX test/cpp_headers/sock.o 00:04:21.396 CXX test/cpp_headers/stdinc.o 00:04:21.396 CXX test/cpp_headers/string.o 00:04:21.396 CXX test/cpp_headers/thread.o 00:04:21.396 LINK cuse 00:04:21.654 CXX test/cpp_headers/trace.o 00:04:21.654 CXX test/cpp_headers/trace_parser.o 00:04:21.654 CXX test/cpp_headers/tree.o 00:04:21.654 CXX test/cpp_headers/ublk.o 00:04:21.654 CXX test/cpp_headers/util.o 00:04:21.654 CXX test/cpp_headers/uuid.o 00:04:21.654 CXX test/cpp_headers/version.o 00:04:21.654 CXX test/cpp_headers/vfio_user_pci.o 00:04:21.654 CXX test/cpp_headers/vfio_user_spec.o 00:04:21.654 CXX test/cpp_headers/vhost.o 00:04:21.654 CXX test/cpp_headers/vmd.o 00:04:21.654 CXX test/cpp_headers/xor.o 00:04:21.654 CXX test/cpp_headers/zipf.o 00:04:25.845 LINK esnap 00:04:25.845 00:04:25.845 real 1m36.390s 00:04:25.845 user 9m12.296s 00:04:25.845 sys 1m37.357s 00:04:25.845 ************************************ 00:04:25.845 END TEST make 00:04:25.845 ************************************ 00:04:25.845 01:24:33 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:25.845 01:24:33 make -- common/autotest_common.sh@10 -- $ set +x 00:04:25.845 01:24:34 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:25.845 01:24:34 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:25.846 01:24:34 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:25.846 01:24:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.846 01:24:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:25.846 01:24:34 -- pm/common@44 -- $ pid=5296 00:04:25.846 01:24:34 -- pm/common@50 -- $ kill -TERM 5296 00:04:25.846 01:24:34 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.846 01:24:34 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:25.846 01:24:34 -- pm/common@44 -- $ pid=5298 00:04:25.846 01:24:34 -- pm/common@50 -- $ kill -TERM 5298 00:04:25.846 01:24:34 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:25.846 01:24:34 -- 
spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:25.846 01:24:34 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:25.846 01:24:34 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:25.846 01:24:34 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:25.846 01:24:34 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:25.846 01:24:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:25.846 01:24:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:25.846 01:24:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:25.846 01:24:34 -- scripts/common.sh@336 -- # IFS=.-: 00:04:25.846 01:24:34 -- scripts/common.sh@336 -- # read -ra ver1 00:04:25.846 01:24:34 -- scripts/common.sh@337 -- # IFS=.-: 00:04:25.846 01:24:34 -- scripts/common.sh@337 -- # read -ra ver2 00:04:25.846 01:24:34 -- scripts/common.sh@338 -- # local 'op=<' 00:04:25.846 01:24:34 -- scripts/common.sh@340 -- # ver1_l=2 00:04:25.846 01:24:34 -- scripts/common.sh@341 -- # ver2_l=1 00:04:25.846 01:24:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:25.846 01:24:34 -- scripts/common.sh@344 -- # case "$op" in 00:04:25.846 01:24:34 -- scripts/common.sh@345 -- # : 1 00:04:25.846 01:24:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:25.846 01:24:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:25.846 01:24:34 -- scripts/common.sh@365 -- # decimal 1 00:04:25.846 01:24:34 -- scripts/common.sh@353 -- # local d=1 00:04:25.846 01:24:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:25.846 01:24:34 -- scripts/common.sh@355 -- # echo 1 00:04:25.846 01:24:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:25.846 01:24:34 -- scripts/common.sh@366 -- # decimal 2 00:04:25.846 01:24:34 -- scripts/common.sh@353 -- # local d=2 00:04:25.846 01:24:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:25.846 01:24:34 -- scripts/common.sh@355 -- # echo 2 00:04:25.846 01:24:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:25.846 01:24:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:25.846 01:24:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:25.846 01:24:34 -- scripts/common.sh@368 -- # return 0 00:04:25.846 01:24:34 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:25.846 01:24:34 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:25.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.846 --rc genhtml_branch_coverage=1 00:04:25.846 --rc genhtml_function_coverage=1 00:04:25.846 --rc genhtml_legend=1 00:04:25.846 --rc geninfo_all_blocks=1 00:04:25.846 --rc geninfo_unexecuted_blocks=1 00:04:25.846 00:04:25.846 ' 00:04:25.846 01:24:34 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:25.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.846 --rc genhtml_branch_coverage=1 00:04:25.846 --rc genhtml_function_coverage=1 00:04:25.846 --rc genhtml_legend=1 00:04:25.846 --rc geninfo_all_blocks=1 00:04:25.846 --rc geninfo_unexecuted_blocks=1 00:04:25.846 00:04:25.846 ' 00:04:25.846 01:24:34 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:25.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.846 --rc genhtml_branch_coverage=1 00:04:25.846 --rc genhtml_function_coverage=1 00:04:25.846 --rc genhtml_legend=1 00:04:25.846 --rc geninfo_all_blocks=1 00:04:25.846 --rc geninfo_unexecuted_blocks=1 00:04:25.846 
00:04:25.846 ' 00:04:25.846 01:24:34 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:25.846 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:25.846 --rc genhtml_branch_coverage=1 00:04:25.846 --rc genhtml_function_coverage=1 00:04:25.846 --rc genhtml_legend=1 00:04:25.846 --rc geninfo_all_blocks=1 00:04:25.846 --rc geninfo_unexecuted_blocks=1 00:04:25.846 00:04:25.846 ' 00:04:25.846 01:24:34 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:25.846 01:24:34 -- nvmf/common.sh@7 -- # uname -s 00:04:25.846 01:24:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:25.846 01:24:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:25.846 01:24:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:25.846 01:24:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:25.846 01:24:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:25.846 01:24:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:25.846 01:24:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:25.846 01:24:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:25.846 01:24:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:25.846 01:24:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:25.846 01:24:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:04:25.846 01:24:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:04:25.846 01:24:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:25.846 01:24:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:25.846 01:24:34 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:04:25.846 01:24:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:25.846 01:24:34 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:25.846 01:24:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:25.846 01:24:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:25.846 01:24:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:25.846 01:24:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:25.846 01:24:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.846 01:24:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.846 01:24:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.846 01:24:34 -- paths/export.sh@5 -- # export PATH 00:04:25.846 01:24:34 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:25.846 01:24:34 -- nvmf/common.sh@51 -- # : 0 00:04:25.846 01:24:34 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:25.846 01:24:34 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:25.846 01:24:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:25.846 01:24:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:25.846 01:24:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:25.846 01:24:34 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:25.846 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:25.846 01:24:34 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:25.846 01:24:34 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:25.846 01:24:34 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:25.846 01:24:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:25.846 01:24:34 -- spdk/autotest.sh@32 -- # uname -s 00:04:25.846 01:24:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:25.846 01:24:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:25.846 01:24:34 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:25.846 01:24:34 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:25.846 01:24:34 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:25.846 01:24:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:25.846 01:24:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:25.846 01:24:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:25.846 01:24:34 -- spdk/autotest.sh@48 -- # udevadm_pid=54998 00:04:25.846 01:24:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:25.846 01:24:34 -- pm/common@17 -- # local monitor 00:04:25.846 01:24:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:25.846 01:24:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:25.846 01:24:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:26.105 01:24:34 -- pm/common@25 -- # sleep 1 00:04:26.105 01:24:34 -- pm/common@21 -- # date +%s 00:04:26.105 01:24:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731806674 00:04:26.105 01:24:34 -- pm/common@21 -- # date +%s 00:04:26.105 01:24:34 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1731806674 00:04:26.105 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731806674_collect-vmstat.pm.log 00:04:26.105 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1731806674_collect-cpu-load.pm.log 00:04:27.040 01:24:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:27.040 01:24:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:27.040 01:24:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:27.040 01:24:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.040 01:24:35 -- spdk/autotest.sh@59 -- # create_test_list 
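The prologue traced above captures the host's existing kernel core_pattern and points coredumps at SPDK's core-collector.sh so that crashes during the run land in the shared output/coredumps directory. A minimal sketch of that save-and-override idea follows; the /proc/sys/kernel/core_pattern target and the restore-on-exit trap are assumptions here (the trace only shows the old pattern being captured and the new pipe command being echoed), so treat it as an illustration rather than autotest.sh's actual code.

    # Sketch (assumed paths and restore logic): funnel kernel coredumps to a
    # collector script, keeping the previous pattern so it can be put back later.
    rootdir=/home/vagrant/spdk_repo/spdk          # as seen in the trace above
    output_dir=$rootdir/../output                 # as seen in the trace above
    old_core_pattern=$(< /proc/sys/kernel/core_pattern)
    mkdir -p "$output_dir/coredumps"
    echo "|$rootdir/scripts/core-collector.sh %P %s %t" > /proc/sys/kernel/core_pattern
    trap 'echo "$old_core_pattern" > /proc/sys/kernel/core_pattern' EXIT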
00:04:27.040 01:24:35 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:27.040 01:24:35 -- common/autotest_common.sh@10 -- # set +x 00:04:27.040 01:24:35 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:27.040 01:24:35 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:27.040 01:24:35 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:27.040 01:24:35 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:27.040 01:24:35 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:27.040 01:24:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:27.040 01:24:35 -- common/autotest_common.sh@1457 -- # uname 00:04:27.040 01:24:35 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:27.040 01:24:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:27.040 01:24:35 -- common/autotest_common.sh@1477 -- # uname 00:04:27.040 01:24:35 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:27.040 01:24:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:27.040 01:24:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:27.040 lcov: LCOV version 1.15 00:04:27.040 01:24:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:45.128 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:45.128 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:00.013 01:25:06 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:00.013 01:25:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.013 01:25:06 -- common/autotest_common.sh@10 -- # set +x 00:05:00.013 01:25:06 -- spdk/autotest.sh@78 -- # rm -f 00:05:00.013 01:25:06 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:00.013 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:00.013 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:00.013 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:00.013 01:25:07 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:00.013 01:25:07 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:00.013 01:25:07 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:00.013 01:25:07 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:00.013 01:25:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:00.013 01:25:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:00.013 01:25:07 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:00.013 01:25:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.013 01:25:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.013 01:25:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:00.013 01:25:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 
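The get_zoned_devs loop traced above (its remaining iterations continue just below) walks /sys/block/nvme* and flags any namespace whose queue/zoned attribute reports something other than none, so that zoned namespaces are kept out of the destructive GPT check and wipe that follows. A rough standalone equivalent of that check, assuming only that sysfs exposes queue/zoned for each block device, would be:

    # Sketch: collect NVMe namespaces whose queue/zoned attribute is not "none"
    # (i.e. ZNS devices), mirroring the is_block_zoned checks in the trace.
    declare -A zoned_devs=()
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue
        if [[ $(< "$nvme/queue/zoned") != none ]]; then
            zoned_devs[${nvme##*/}]=1
        fi
    done
    echo "zoned namespaces found: ${#zoned_devs[@]}"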
00:05:00.013 01:25:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:00.013 01:25:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:00.013 01:25:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.013 01:25:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:00.013 01:25:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:00.013 01:25:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:00.013 01:25:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:00.013 01:25:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.013 01:25:07 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:00.013 01:25:07 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:00.013 01:25:07 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:00.013 01:25:07 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:00.013 01:25:07 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:00.013 01:25:07 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:00.013 01:25:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.013 01:25:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.013 01:25:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:00.013 01:25:07 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:00.013 01:25:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:00.013 No valid GPT data, bailing 00:05:00.013 01:25:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:00.013 01:25:07 -- scripts/common.sh@394 -- # pt= 00:05:00.013 01:25:07 -- scripts/common.sh@395 -- # return 1 00:05:00.013 01:25:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:00.013 1+0 records in 00:05:00.013 1+0 records out 00:05:00.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00566498 s, 185 MB/s 00:05:00.013 01:25:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.013 01:25:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.013 01:25:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:00.013 01:25:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:00.013 01:25:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:00.013 No valid GPT data, bailing 00:05:00.013 01:25:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:00.013 01:25:07 -- scripts/common.sh@394 -- # pt= 00:05:00.013 01:25:07 -- scripts/common.sh@395 -- # return 1 00:05:00.013 01:25:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:00.013 1+0 records in 00:05:00.013 1+0 records out 00:05:00.013 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00371055 s, 283 MB/s 00:05:00.013 01:25:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.013 01:25:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.013 01:25:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:00.013 01:25:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:00.013 01:25:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:05:00.013 No valid GPT data, bailing 00:05:00.014 01:25:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:00.014 01:25:07 -- 
scripts/common.sh@394 -- # pt= 00:05:00.014 01:25:07 -- scripts/common.sh@395 -- # return 1 00:05:00.014 01:25:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:00.014 1+0 records in 00:05:00.014 1+0 records out 00:05:00.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00336736 s, 311 MB/s 00:05:00.014 01:25:07 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:00.014 01:25:07 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:00.014 01:25:07 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:00.014 01:25:07 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:00.014 01:25:07 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:00.014 No valid GPT data, bailing 00:05:00.014 01:25:07 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:00.014 01:25:07 -- scripts/common.sh@394 -- # pt= 00:05:00.014 01:25:07 -- scripts/common.sh@395 -- # return 1 00:05:00.014 01:25:07 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:00.014 1+0 records in 00:05:00.014 1+0 records out 00:05:00.014 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387136 s, 271 MB/s 00:05:00.014 01:25:07 -- spdk/autotest.sh@105 -- # sync 00:05:00.014 01:25:07 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:00.014 01:25:07 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:00.014 01:25:07 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:01.390 01:25:09 -- spdk/autotest.sh@111 -- # uname -s 00:05:01.390 01:25:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:01.390 01:25:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:01.390 01:25:09 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:05:01.958 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:01.958 Hugepages 00:05:01.958 node hugesize free / total 00:05:01.958 node0 1048576kB 0 / 0 00:05:01.958 node0 2048kB 0 / 0 00:05:01.958 00:05:01.958 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:01.958 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:01.958 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:01.958 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:01.958 01:25:10 -- spdk/autotest.sh@117 -- # uname -s 00:05:01.958 01:25:10 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:01.958 01:25:10 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:01.958 01:25:10 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:02.894 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:02.894 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.894 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:02.894 01:25:11 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:03.832 01:25:12 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:03.832 01:25:12 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:03.832 01:25:12 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:03.832 01:25:12 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:03.832 01:25:12 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:03.832 01:25:12 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:03.832 01:25:12 -- common/autotest_common.sh@1499 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:03.832 01:25:12 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:03.832 01:25:12 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:04.091 01:25:12 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:04.091 01:25:12 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:04.091 01:25:12 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:04.350 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:04.350 Waiting for block devices as requested 00:05:04.350 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.350 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:04.610 01:25:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:04.610 01:25:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:04.610 01:25:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:04.610 01:25:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.610 01:25:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.610 01:25:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:04.610 01:25:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:04.610 01:25:12 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:04.610 01:25:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:04.610 01:25:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:04.610 01:25:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:04.610 01:25:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:04.610 01:25:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:04.610 01:25:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:04.610 01:25:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:04.610 01:25:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:04.610 01:25:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:04.610 01:25:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:04.610 01:25:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.610 01:25:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:04.610 01:25:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:04.610 01:25:12 -- common/autotest_common.sh@1543 -- # continue 00:05:04.610 01:25:12 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:04.610 01:25:12 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:04.610 01:25:12 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:04.610 01:25:12 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:04.610 01:25:12 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.610 01:25:12 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:04.610 01:25:12 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:04.610 01:25:12 -- 
common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:04.610 01:25:12 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:04.610 01:25:12 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:04.610 01:25:12 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:04.610 01:25:12 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:04.610 01:25:12 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:04.610 01:25:12 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:04.610 01:25:12 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:04.610 01:25:12 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:04.610 01:25:12 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:04.610 01:25:12 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:04.610 01:25:12 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:04.610 01:25:12 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:04.610 01:25:12 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:04.610 01:25:12 -- common/autotest_common.sh@1543 -- # continue 00:05:04.610 01:25:12 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:04.610 01:25:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:04.610 01:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:04.610 01:25:12 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:04.610 01:25:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:04.610 01:25:12 -- common/autotest_common.sh@10 -- # set +x 00:05:04.610 01:25:12 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:05.178 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:05.441 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.441 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:05.441 01:25:13 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:05.441 01:25:13 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:05.441 01:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.441 01:25:13 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:05.441 01:25:13 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:05.441 01:25:13 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:05.441 01:25:13 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:05.441 01:25:13 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:05.441 01:25:13 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:05.441 01:25:13 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:05.441 01:25:13 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:05.441 01:25:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:05.441 01:25:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:05.441 01:25:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:05.441 01:25:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:05.441 01:25:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:05.720 01:25:13 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:05.720 01:25:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:05.720 01:25:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:05.720 01:25:13 -- common/autotest_common.sh@1566 -- # cat 
/sys/bus/pci/devices/0000:00:10.0/device 00:05:05.720 01:25:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:05.720 01:25:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.720 01:25:13 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:05.720 01:25:13 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:05.720 01:25:13 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:05.720 01:25:13 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:05.720 01:25:13 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:05.720 01:25:13 -- common/autotest_common.sh@1572 -- # return 0 00:05:05.720 01:25:13 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:05.720 01:25:13 -- common/autotest_common.sh@1580 -- # return 0 00:05:05.720 01:25:13 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:05.720 01:25:13 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:05.720 01:25:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.720 01:25:13 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:05.720 01:25:13 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:05.720 01:25:13 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:05.720 01:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.720 01:25:13 -- spdk/autotest.sh@151 -- # [[ 1 -eq 1 ]] 00:05:05.720 01:25:13 -- spdk/autotest.sh@152 -- # export SPDK_SOCK_IMPL_DEFAULT=uring 00:05:05.720 01:25:13 -- spdk/autotest.sh@152 -- # SPDK_SOCK_IMPL_DEFAULT=uring 00:05:05.720 01:25:13 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.720 01:25:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.720 01:25:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.720 01:25:13 -- common/autotest_common.sh@10 -- # set +x 00:05:05.720 ************************************ 00:05:05.720 START TEST env 00:05:05.720 ************************************ 00:05:05.720 01:25:13 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:05.720 * Looking for test storage... 
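Every suite from this point on is launched through the run_test helper (common/autotest_common.sh in the trace), which is what prints the starred START TEST / END TEST banners and passes each sub-script's exit status back to autotest.sh. The wrapper below is only a simplified stand-in that approximates the visible behaviour (banners, timing, status propagation), not the real helper:

    # Sketch: a run_test-style wrapper approximating the banners seen in this log.
    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }

    run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh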
00:05:05.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:05.720 01:25:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:05.720 01:25:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:05.720 01:25:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:05.720 01:25:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.720 01:25:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:05.720 01:25:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:05.720 01:25:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:05.720 01:25:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:05.720 01:25:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:05.720 01:25:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:05.720 01:25:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:05.720 01:25:14 env -- scripts/common.sh@344 -- # case "$op" in 00:05:05.720 01:25:14 env -- scripts/common.sh@345 -- # : 1 00:05:05.720 01:25:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:05.720 01:25:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:05.720 01:25:14 env -- scripts/common.sh@365 -- # decimal 1 00:05:05.720 01:25:14 env -- scripts/common.sh@353 -- # local d=1 00:05:05.720 01:25:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.720 01:25:14 env -- scripts/common.sh@355 -- # echo 1 00:05:05.720 01:25:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:05.720 01:25:14 env -- scripts/common.sh@366 -- # decimal 2 00:05:05.720 01:25:14 env -- scripts/common.sh@353 -- # local d=2 00:05:05.720 01:25:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.720 01:25:14 env -- scripts/common.sh@355 -- # echo 2 00:05:05.720 01:25:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:05.720 01:25:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:05.720 01:25:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:05.720 01:25:14 env -- scripts/common.sh@368 -- # return 0 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:05.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.720 --rc genhtml_branch_coverage=1 00:05:05.720 --rc genhtml_function_coverage=1 00:05:05.720 --rc genhtml_legend=1 00:05:05.720 --rc geninfo_all_blocks=1 00:05:05.720 --rc geninfo_unexecuted_blocks=1 00:05:05.720 00:05:05.720 ' 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:05.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.720 --rc genhtml_branch_coverage=1 00:05:05.720 --rc genhtml_function_coverage=1 00:05:05.720 --rc genhtml_legend=1 00:05:05.720 --rc geninfo_all_blocks=1 00:05:05.720 --rc geninfo_unexecuted_blocks=1 00:05:05.720 00:05:05.720 ' 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:05.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.720 --rc genhtml_branch_coverage=1 00:05:05.720 --rc genhtml_function_coverage=1 00:05:05.720 --rc 
genhtml_legend=1 00:05:05.720 --rc geninfo_all_blocks=1 00:05:05.720 --rc geninfo_unexecuted_blocks=1 00:05:05.720 00:05:05.720 ' 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:05.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.720 --rc genhtml_branch_coverage=1 00:05:05.720 --rc genhtml_function_coverage=1 00:05:05.720 --rc genhtml_legend=1 00:05:05.720 --rc geninfo_all_blocks=1 00:05:05.720 --rc geninfo_unexecuted_blocks=1 00:05:05.720 00:05:05.720 ' 00:05:05.720 01:25:14 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:05.720 01:25:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:05.720 01:25:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:05.720 ************************************ 00:05:05.720 START TEST env_memory 00:05:05.720 ************************************ 00:05:05.720 01:25:14 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:05.720 00:05:05.720 00:05:05.720 CUnit - A unit testing framework for C - Version 2.1-3 00:05:05.721 http://cunit.sourceforge.net/ 00:05:05.721 00:05:05.721 00:05:05.721 Suite: memory 00:05:05.989 Test: alloc and free memory map ...[2024-11-17 01:25:14.212378] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:05.990 passed 00:05:05.990 Test: mem map translation ...[2024-11-17 01:25:14.273250] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:05.990 [2024-11-17 01:25:14.273327] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:05.990 [2024-11-17 01:25:14.273437] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:05.990 [2024-11-17 01:25:14.273469] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:05.990 passed 00:05:05.990 Test: mem map registration ...[2024-11-17 01:25:14.371499] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:05.990 [2024-11-17 01:25:14.371570] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:05.990 passed 00:05:06.249 Test: mem map adjacent registrations ...passed 00:05:06.249 00:05:06.249 Run Summary: Type Total Ran Passed Failed Inactive 00:05:06.249 suites 1 1 n/a 0 0 00:05:06.249 tests 4 4 4 0 0 00:05:06.249 asserts 152 152 152 0 n/a 00:05:06.249 00:05:06.249 Elapsed time = 0.345 seconds 00:05:06.249 00:05:06.249 real 0m0.389s 00:05:06.249 user 0m0.350s 00:05:06.249 sys 0m0.030s 00:05:06.249 01:25:14 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:06.249 01:25:14 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:06.249 ************************************ 00:05:06.249 END TEST env_memory 00:05:06.249 ************************************ 00:05:06.249 01:25:14 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.249 01:25:14 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:06.249 01:25:14 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:06.249 01:25:14 env -- common/autotest_common.sh@10 -- # set +x 00:05:06.249 ************************************ 00:05:06.249 START TEST env_vtophys 00:05:06.249 ************************************ 00:05:06.249 01:25:14 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:06.249 EAL: lib.eal log level changed from notice to debug 00:05:06.249 EAL: Detected lcore 0 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 1 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 2 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 3 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 4 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 5 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 6 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 7 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 8 as core 0 on socket 0 00:05:06.249 EAL: Detected lcore 9 as core 0 on socket 0 00:05:06.249 EAL: Maximum logical cores by configuration: 128 00:05:06.249 EAL: Detected CPU lcores: 10 00:05:06.249 EAL: Detected NUMA nodes: 1 00:05:06.249 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:06.249 EAL: Detected shared linkage of DPDK 00:05:06.249 EAL: No shared files mode enabled, IPC will be disabled 00:05:06.249 EAL: Selected IOVA mode 'PA' 00:05:06.249 EAL: Probing VFIO support... 00:05:06.249 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.249 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:06.249 EAL: Ask a virtual area of 0x2e000 bytes 00:05:06.249 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:06.249 EAL: Setting up physically contiguous memory... 
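In the vtophys startup above, EAL probes for VFIO and finds no vfio kernel module in this VM, so VFIO support is skipped (the run uses IOVA mode 'PA'). The same condition can be checked from the shell before launching an SPDK app by looking for the module directories EAL probes:

    # Sketch: pre-flight check mirroring EAL's probe of /sys/module/vfio*.
    for mod in vfio vfio_pci; do
        if [[ -d /sys/module/$mod ]]; then
            echo "$mod: loaded"
        else
            echo "$mod: not loaded (EAL will skip VFIO support)"
        fi
    done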
00:05:06.249 EAL: Setting maximum number of open files to 524288 00:05:06.249 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:06.249 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:06.249 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.249 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:06.249 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.249 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.249 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:06.249 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:06.249 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.249 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:06.249 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.249 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.249 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:06.249 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:06.249 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.249 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:06.249 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.249 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.249 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:06.249 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:06.249 EAL: Ask a virtual area of 0x61000 bytes 00:05:06.249 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:06.250 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:06.250 EAL: Ask a virtual area of 0x400000000 bytes 00:05:06.250 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:06.250 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:06.250 EAL: Hugepages will be freed exactly as allocated. 00:05:06.250 EAL: No shared files mode enabled, IPC is disabled 00:05:06.250 EAL: No shared files mode enabled, IPC is disabled 00:05:06.509 EAL: TSC frequency is ~2200000 KHz 00:05:06.509 EAL: Main lcore 0 is ready (tid=7ff37ac1ba40;cpuset=[0]) 00:05:06.509 EAL: Trying to obtain current memory policy. 00:05:06.509 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.509 EAL: Restoring previous memory policy: 0 00:05:06.509 EAL: request: mp_malloc_sync 00:05:06.509 EAL: No shared files mode enabled, IPC is disabled 00:05:06.509 EAL: Heap on socket 0 was expanded by 2MB 00:05:06.509 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:06.509 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:06.509 EAL: Mem event callback 'spdk:(nil)' registered 00:05:06.509 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:06.509 00:05:06.509 00:05:06.509 CUnit - A unit testing framework for C - Version 2.1-3 00:05:06.509 http://cunit.sourceforge.net/ 00:05:06.509 00:05:06.509 00:05:06.509 Suite: components_suite 00:05:06.770 Test: vtophys_malloc_test ...passed 00:05:06.770 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
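The memseg reservations above can be sanity-checked from the numbers in the log: each of the 4 segment lists holds n_segs:8192 segments of hugepage_sz:2097152 bytes, which is exactly the 0x400000000 (16 GiB) of virtual address space requested per list, or 64 GiB of VA reserved in total before any physical hugepages are committed. (The smaller 0x61000 reservations preceding each list appear to hold per-list bookkeeping.)

    # 8192 segments x 2 MiB per memseg list, 4 lists total:
    printf '0x%x per list\n' $((8192 * 2097152))                         # -> 0x400000000 per list
    printf '%d GiB reserved overall\n' $((4 * 8192 * 2097152 / 1024**3)) # -> 64 GiB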
00:05:06.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.770 EAL: Restoring previous memory policy: 4 00:05:06.770 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.770 EAL: request: mp_malloc_sync 00:05:06.770 EAL: No shared files mode enabled, IPC is disabled 00:05:06.770 EAL: Heap on socket 0 was expanded by 4MB 00:05:06.770 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.770 EAL: request: mp_malloc_sync 00:05:06.770 EAL: No shared files mode enabled, IPC is disabled 00:05:06.770 EAL: Heap on socket 0 was shrunk by 4MB 00:05:06.770 EAL: Trying to obtain current memory policy. 00:05:06.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.770 EAL: Restoring previous memory policy: 4 00:05:06.770 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.770 EAL: request: mp_malloc_sync 00:05:06.770 EAL: No shared files mode enabled, IPC is disabled 00:05:06.770 EAL: Heap on socket 0 was expanded by 6MB 00:05:06.770 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.770 EAL: request: mp_malloc_sync 00:05:06.770 EAL: No shared files mode enabled, IPC is disabled 00:05:06.770 EAL: Heap on socket 0 was shrunk by 6MB 00:05:06.770 EAL: Trying to obtain current memory policy. 00:05:06.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.770 EAL: Restoring previous memory policy: 4 00:05:06.770 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.770 EAL: request: mp_malloc_sync 00:05:06.770 EAL: No shared files mode enabled, IPC is disabled 00:05:06.770 EAL: Heap on socket 0 was expanded by 10MB 00:05:06.770 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.770 EAL: request: mp_malloc_sync 00:05:06.770 EAL: No shared files mode enabled, IPC is disabled 00:05:06.770 EAL: Heap on socket 0 was shrunk by 10MB 00:05:06.770 EAL: Trying to obtain current memory policy. 00:05:06.770 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:06.770 EAL: Restoring previous memory policy: 4 00:05:06.770 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.770 EAL: request: mp_malloc_sync 00:05:06.770 EAL: No shared files mode enabled, IPC is disabled 00:05:06.770 EAL: Heap on socket 0 was expanded by 18MB 00:05:06.770 EAL: Calling mem event callback 'spdk:(nil)' 00:05:06.770 EAL: request: mp_malloc_sync 00:05:06.770 EAL: No shared files mode enabled, IPC is disabled 00:05:06.770 EAL: Heap on socket 0 was shrunk by 18MB 00:05:07.029 EAL: Trying to obtain current memory policy. 00:05:07.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.029 EAL: Restoring previous memory policy: 4 00:05:07.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.029 EAL: request: mp_malloc_sync 00:05:07.029 EAL: No shared files mode enabled, IPC is disabled 00:05:07.029 EAL: Heap on socket 0 was expanded by 34MB 00:05:07.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.029 EAL: request: mp_malloc_sync 00:05:07.029 EAL: No shared files mode enabled, IPC is disabled 00:05:07.029 EAL: Heap on socket 0 was shrunk by 34MB 00:05:07.029 EAL: Trying to obtain current memory policy. 
00:05:07.029 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.029 EAL: Restoring previous memory policy: 4 00:05:07.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.029 EAL: request: mp_malloc_sync 00:05:07.029 EAL: No shared files mode enabled, IPC is disabled 00:05:07.029 EAL: Heap on socket 0 was expanded by 66MB 00:05:07.029 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.029 EAL: request: mp_malloc_sync 00:05:07.029 EAL: No shared files mode enabled, IPC is disabled 00:05:07.029 EAL: Heap on socket 0 was shrunk by 66MB 00:05:07.289 EAL: Trying to obtain current memory policy. 00:05:07.289 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.289 EAL: Restoring previous memory policy: 4 00:05:07.289 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.289 EAL: request: mp_malloc_sync 00:05:07.289 EAL: No shared files mode enabled, IPC is disabled 00:05:07.289 EAL: Heap on socket 0 was expanded by 130MB 00:05:07.289 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.289 EAL: request: mp_malloc_sync 00:05:07.289 EAL: No shared files mode enabled, IPC is disabled 00:05:07.289 EAL: Heap on socket 0 was shrunk by 130MB 00:05:07.548 EAL: Trying to obtain current memory policy. 00:05:07.548 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:07.548 EAL: Restoring previous memory policy: 4 00:05:07.548 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.548 EAL: request: mp_malloc_sync 00:05:07.548 EAL: No shared files mode enabled, IPC is disabled 00:05:07.548 EAL: Heap on socket 0 was expanded by 258MB 00:05:07.808 EAL: Calling mem event callback 'spdk:(nil)' 00:05:07.808 EAL: request: mp_malloc_sync 00:05:07.808 EAL: No shared files mode enabled, IPC is disabled 00:05:07.808 EAL: Heap on socket 0 was shrunk by 258MB 00:05:08.067 EAL: Trying to obtain current memory policy. 00:05:08.067 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:08.326 EAL: Restoring previous memory policy: 4 00:05:08.326 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.326 EAL: request: mp_malloc_sync 00:05:08.326 EAL: No shared files mode enabled, IPC is disabled 00:05:08.326 EAL: Heap on socket 0 was expanded by 514MB 00:05:08.895 EAL: Calling mem event callback 'spdk:(nil)' 00:05:08.895 EAL: request: mp_malloc_sync 00:05:08.895 EAL: No shared files mode enabled, IPC is disabled 00:05:08.895 EAL: Heap on socket 0 was shrunk by 514MB 00:05:09.464 EAL: Trying to obtain current memory policy. 
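The expand/shrink sizes reported by vtophys_spdk_malloc_test (4, 6, 10, 18, ... MB above, ending with the 1026 MB step just below) follow a simple pattern: each allocation is a power-of-two number of MiB plus, presumably, one extra 2 MiB hugepage of allocator overhead, so the heap grows by 2^k + 2 MB at step k. The series can be reproduced with:

    # Reproduce the expand-by sizes seen in the malloc test trace (k = 1..10):
    for k in $(seq 1 10); do printf '%dMB ' $((2**k + 2)); done; echo
    # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB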
00:05:09.464 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:09.723 EAL: Restoring previous memory policy: 4 00:05:09.723 EAL: Calling mem event callback 'spdk:(nil)' 00:05:09.723 EAL: request: mp_malloc_sync 00:05:09.723 EAL: No shared files mode enabled, IPC is disabled 00:05:09.723 EAL: Heap on socket 0 was expanded by 1026MB 00:05:11.098 EAL: Calling mem event callback 'spdk:(nil)' 00:05:11.098 EAL: request: mp_malloc_sync 00:05:11.098 EAL: No shared files mode enabled, IPC is disabled 00:05:11.098 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:12.477 passed 00:05:12.477 00:05:12.477 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.477 suites 1 1 n/a 0 0 00:05:12.477 tests 2 2 2 0 0 00:05:12.477 asserts 5747 5747 5747 0 n/a 00:05:12.477 00:05:12.477 Elapsed time = 5.668 seconds 00:05:12.477 EAL: Calling mem event callback 'spdk:(nil)' 00:05:12.477 EAL: request: mp_malloc_sync 00:05:12.477 EAL: No shared files mode enabled, IPC is disabled 00:05:12.477 EAL: Heap on socket 0 was shrunk by 2MB 00:05:12.477 EAL: No shared files mode enabled, IPC is disabled 00:05:12.477 EAL: No shared files mode enabled, IPC is disabled 00:05:12.477 EAL: No shared files mode enabled, IPC is disabled 00:05:12.477 00:05:12.477 real 0m5.990s 00:05:12.477 user 0m5.211s 00:05:12.477 sys 0m0.631s 00:05:12.477 01:25:20 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.477 01:25:20 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:12.477 ************************************ 00:05:12.477 END TEST env_vtophys 00:05:12.477 ************************************ 00:05:12.477 01:25:20 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.477 01:25:20 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.477 01:25:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.477 01:25:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.477 ************************************ 00:05:12.477 START TEST env_pci 00:05:12.477 ************************************ 00:05:12.477 01:25:20 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:12.477 00:05:12.477 00:05:12.477 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.477 http://cunit.sourceforge.net/ 00:05:12.477 00:05:12.477 00:05:12.477 Suite: pci 00:05:12.477 Test: pci_hook ...[2024-11-17 01:25:20.657018] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 57286 has claimed it 00:05:12.477 passed 00:05:12.477 00:05:12.477 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.477 suites 1 1 n/a 0 0 00:05:12.477 tests 1 1 1 0 0 00:05:12.477 asserts 25 25 25 0 n/a 00:05:12.477 00:05:12.477 Elapsed time = 0.006 seconds 00:05:12.477 EAL: Cannot find device (10000:00:01.0) 00:05:12.477 EAL: Failed to attach device on primary process 00:05:12.477 00:05:12.477 real 0m0.075s 00:05:12.477 user 0m0.032s 00:05:12.477 sys 0m0.042s 00:05:12.477 01:25:20 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.477 01:25:20 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:12.477 ************************************ 00:05:12.477 END TEST env_pci 00:05:12.477 ************************************ 00:05:12.477 01:25:20 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:12.477 01:25:20 env -- env/env.sh@15 -- # uname 00:05:12.477 01:25:20 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:12.477 01:25:20 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:12.477 01:25:20 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.477 01:25:20 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:12.477 01:25:20 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.477 01:25:20 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.477 ************************************ 00:05:12.477 START TEST env_dpdk_post_init 00:05:12.477 ************************************ 00:05:12.477 01:25:20 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:12.477 EAL: Detected CPU lcores: 10 00:05:12.477 EAL: Detected NUMA nodes: 1 00:05:12.477 EAL: Detected shared linkage of DPDK 00:05:12.477 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.477 EAL: Selected IOVA mode 'PA' 00:05:12.736 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.736 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:12.736 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:12.736 Starting DPDK initialization... 00:05:12.736 Starting SPDK post initialization... 00:05:12.736 SPDK NVMe probe 00:05:12.736 Attaching to 0000:00:10.0 00:05:12.736 Attaching to 0000:00:11.0 00:05:12.736 Attached to 0000:00:10.0 00:05:12.736 Attached to 0000:00:11.0 00:05:12.736 Cleaning up... 00:05:12.736 00:05:12.736 real 0m0.280s 00:05:12.736 user 0m0.095s 00:05:12.736 sys 0m0.085s 00:05:12.736 01:25:21 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.736 01:25:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 ************************************ 00:05:12.736 END TEST env_dpdk_post_init 00:05:12.736 ************************************ 00:05:12.736 01:25:21 env -- env/env.sh@26 -- # uname 00:05:12.736 01:25:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:12.736 01:25:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.736 01:25:21 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.736 01:25:21 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.736 01:25:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.736 ************************************ 00:05:12.736 START TEST env_mem_callbacks 00:05:12.736 ************************************ 00:05:12.736 01:25:21 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:12.736 EAL: Detected CPU lcores: 10 00:05:12.736 EAL: Detected NUMA nodes: 1 00:05:12.736 EAL: Detected shared linkage of DPDK 00:05:12.736 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:12.736 EAL: Selected IOVA mode 'PA' 00:05:12.996 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:12.996 00:05:12.996 00:05:12.996 CUnit - A unit testing framework for C - Version 2.1-3 00:05:12.996 http://cunit.sourceforge.net/ 00:05:12.996 00:05:12.996 00:05:12.996 Suite: memory 00:05:12.996 Test: test ... 
00:05:12.996 register 0x200000200000 2097152 00:05:12.996 malloc 3145728 00:05:12.996 register 0x200000400000 4194304 00:05:12.996 buf 0x2000004fffc0 len 3145728 PASSED 00:05:12.996 malloc 64 00:05:12.996 buf 0x2000004ffec0 len 64 PASSED 00:05:12.996 malloc 4194304 00:05:12.996 register 0x200000800000 6291456 00:05:12.996 buf 0x2000009fffc0 len 4194304 PASSED 00:05:12.996 free 0x2000004fffc0 3145728 00:05:12.996 free 0x2000004ffec0 64 00:05:12.996 unregister 0x200000400000 4194304 PASSED 00:05:12.996 free 0x2000009fffc0 4194304 00:05:12.996 unregister 0x200000800000 6291456 PASSED 00:05:12.996 malloc 8388608 00:05:12.996 register 0x200000400000 10485760 00:05:12.996 buf 0x2000005fffc0 len 8388608 PASSED 00:05:12.996 free 0x2000005fffc0 8388608 00:05:12.996 unregister 0x200000400000 10485760 PASSED 00:05:12.996 passed 00:05:12.996 00:05:12.996 Run Summary: Type Total Ran Passed Failed Inactive 00:05:12.996 suites 1 1 n/a 0 0 00:05:12.996 tests 1 1 1 0 0 00:05:12.996 asserts 15 15 15 0 n/a 00:05:12.996 00:05:12.996 Elapsed time = 0.073 seconds 00:05:12.996 00:05:12.996 real 0m0.275s 00:05:12.996 user 0m0.106s 00:05:12.996 sys 0m0.067s 00:05:12.996 01:25:21 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.996 01:25:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:12.996 ************************************ 00:05:12.996 END TEST env_mem_callbacks 00:05:12.996 ************************************ 00:05:12.996 00:05:12.996 real 0m7.471s 00:05:12.996 user 0m5.991s 00:05:12.996 sys 0m1.112s 00:05:12.996 01:25:21 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:12.996 01:25:21 env -- common/autotest_common.sh@10 -- # set +x 00:05:12.996 ************************************ 00:05:12.996 END TEST env 00:05:12.996 ************************************ 00:05:12.996 01:25:21 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:12.996 01:25:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:12.996 01:25:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:12.996 01:25:21 -- common/autotest_common.sh@10 -- # set +x 00:05:13.255 ************************************ 00:05:13.255 START TEST rpc 00:05:13.255 ************************************ 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:13.255 * Looking for test storage... 
00:05:13.255 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:13.255 01:25:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:13.255 01:25:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:13.255 01:25:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:13.255 01:25:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:13.255 01:25:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:13.255 01:25:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:13.255 01:25:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:13.255 01:25:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:13.255 01:25:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:13.255 01:25:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:13.255 01:25:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:13.255 01:25:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:13.255 01:25:21 rpc -- scripts/common.sh@345 -- # : 1 00:05:13.255 01:25:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:13.255 01:25:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:13.255 01:25:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:13.255 01:25:21 rpc -- scripts/common.sh@353 -- # local d=1 00:05:13.255 01:25:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:13.255 01:25:21 rpc -- scripts/common.sh@355 -- # echo 1 00:05:13.255 01:25:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:13.255 01:25:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:13.255 01:25:21 rpc -- scripts/common.sh@353 -- # local d=2 00:05:13.255 01:25:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:13.255 01:25:21 rpc -- scripts/common.sh@355 -- # echo 2 00:05:13.255 01:25:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:13.255 01:25:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:13.255 01:25:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:13.255 01:25:21 rpc -- scripts/common.sh@368 -- # return 0 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:13.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.255 --rc genhtml_branch_coverage=1 00:05:13.255 --rc genhtml_function_coverage=1 00:05:13.255 --rc genhtml_legend=1 00:05:13.255 --rc geninfo_all_blocks=1 00:05:13.255 --rc geninfo_unexecuted_blocks=1 00:05:13.255 00:05:13.255 ' 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:13.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.255 --rc genhtml_branch_coverage=1 00:05:13.255 --rc genhtml_function_coverage=1 00:05:13.255 --rc genhtml_legend=1 00:05:13.255 --rc geninfo_all_blocks=1 00:05:13.255 --rc geninfo_unexecuted_blocks=1 00:05:13.255 00:05:13.255 ' 00:05:13.255 01:25:21 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:13.255 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.255 --rc genhtml_branch_coverage=1 00:05:13.255 --rc genhtml_function_coverage=1 00:05:13.255 --rc 
genhtml_legend=1 00:05:13.255 --rc geninfo_all_blocks=1 00:05:13.255 --rc geninfo_unexecuted_blocks=1 00:05:13.255 00:05:13.255 ' 00:05:13.256 01:25:21 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:13.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:13.256 --rc genhtml_branch_coverage=1 00:05:13.256 --rc genhtml_function_coverage=1 00:05:13.256 --rc genhtml_legend=1 00:05:13.256 --rc geninfo_all_blocks=1 00:05:13.256 --rc geninfo_unexecuted_blocks=1 00:05:13.256 00:05:13.256 ' 00:05:13.256 01:25:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=57413 00:05:13.256 01:25:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:13.256 01:25:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 57413 00:05:13.256 01:25:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:13.256 01:25:21 rpc -- common/autotest_common.sh@835 -- # '[' -z 57413 ']' 00:05:13.256 01:25:21 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.256 01:25:21 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.256 01:25:21 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.256 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.256 01:25:21 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.256 01:25:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.513 [2024-11-17 01:25:21.780547] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:13.513 [2024-11-17 01:25:21.780719] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57413 ] 00:05:13.513 [2024-11-17 01:25:21.967849] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.771 [2024-11-17 01:25:22.077743] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:13.771 [2024-11-17 01:25:22.077849] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 57413' to capture a snapshot of events at runtime. 00:05:13.771 [2024-11-17 01:25:22.077866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:13.771 [2024-11-17 01:25:22.077879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:13.771 [2024-11-17 01:25:22.077889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid57413 for offline analysis/debug. 
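spdk_tgt was launched here with '-e bdev', which is why the trace_get_info output further down reports tpoint_group_mask 0x8 with the bdev group fully enabled, and the startup notice above spells out how to pull the trace while pid 57413 is alive. A short sketch of doing that by hand — the build/bin location for spdk_trace is an assumption based on where spdk_tgt itself lives in this build:

    ./build/bin/spdk_trace -s spdk_tgt -p 57413      # snapshot the live tracepoints, as the notice suggests
    cp /dev/shm/spdk_tgt_trace.pid57413 /tmp/        # or keep the raw shm trace file for offline analysis/debug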
00:05:13.771 [2024-11-17 01:25:22.078999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.029 [2024-11-17 01:25:22.265190] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:14.289 01:25:22 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:14.289 01:25:22 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:14.289 01:25:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.289 01:25:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:14.289 01:25:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:14.289 01:25:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:14.289 01:25:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.289 01:25:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.289 01:25:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.289 ************************************ 00:05:14.289 START TEST rpc_integrity 00:05:14.289 ************************************ 00:05:14.289 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:14.549 { 00:05:14.549 "name": "Malloc0", 00:05:14.549 "aliases": [ 00:05:14.549 "ca3bbecc-3120-4e0b-a11c-0099e61e6a29" 00:05:14.549 ], 00:05:14.549 "product_name": "Malloc disk", 00:05:14.549 "block_size": 512, 00:05:14.549 "num_blocks": 16384, 00:05:14.549 "uuid": "ca3bbecc-3120-4e0b-a11c-0099e61e6a29", 00:05:14.549 "assigned_rate_limits": { 00:05:14.549 "rw_ios_per_sec": 0, 00:05:14.549 "rw_mbytes_per_sec": 0, 00:05:14.549 "r_mbytes_per_sec": 0, 00:05:14.549 "w_mbytes_per_sec": 0 00:05:14.549 }, 00:05:14.549 "claimed": false, 00:05:14.549 "zoned": false, 00:05:14.549 
"supported_io_types": { 00:05:14.549 "read": true, 00:05:14.549 "write": true, 00:05:14.549 "unmap": true, 00:05:14.549 "flush": true, 00:05:14.549 "reset": true, 00:05:14.549 "nvme_admin": false, 00:05:14.549 "nvme_io": false, 00:05:14.549 "nvme_io_md": false, 00:05:14.549 "write_zeroes": true, 00:05:14.549 "zcopy": true, 00:05:14.549 "get_zone_info": false, 00:05:14.549 "zone_management": false, 00:05:14.549 "zone_append": false, 00:05:14.549 "compare": false, 00:05:14.549 "compare_and_write": false, 00:05:14.549 "abort": true, 00:05:14.549 "seek_hole": false, 00:05:14.549 "seek_data": false, 00:05:14.549 "copy": true, 00:05:14.549 "nvme_iov_md": false 00:05:14.549 }, 00:05:14.549 "memory_domains": [ 00:05:14.549 { 00:05:14.549 "dma_device_id": "system", 00:05:14.549 "dma_device_type": 1 00:05:14.549 }, 00:05:14.549 { 00:05:14.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.549 "dma_device_type": 2 00:05:14.549 } 00:05:14.549 ], 00:05:14.549 "driver_specific": {} 00:05:14.549 } 00:05:14.549 ]' 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.549 [2024-11-17 01:25:22.914941] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:14.549 [2024-11-17 01:25:22.915169] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:14.549 [2024-11-17 01:25:22.915212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007b80 00:05:14.549 [2024-11-17 01:25:22.915229] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:14.549 [2024-11-17 01:25:22.917934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:14.549 [2024-11-17 01:25:22.917976] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:14.549 Passthru0 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.549 01:25:22 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.549 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:14.549 { 00:05:14.549 "name": "Malloc0", 00:05:14.549 "aliases": [ 00:05:14.549 "ca3bbecc-3120-4e0b-a11c-0099e61e6a29" 00:05:14.549 ], 00:05:14.549 "product_name": "Malloc disk", 00:05:14.549 "block_size": 512, 00:05:14.549 "num_blocks": 16384, 00:05:14.549 "uuid": "ca3bbecc-3120-4e0b-a11c-0099e61e6a29", 00:05:14.549 "assigned_rate_limits": { 00:05:14.549 "rw_ios_per_sec": 0, 00:05:14.549 "rw_mbytes_per_sec": 0, 00:05:14.549 "r_mbytes_per_sec": 0, 00:05:14.549 "w_mbytes_per_sec": 0 00:05:14.549 }, 00:05:14.549 "claimed": true, 00:05:14.549 "claim_type": "exclusive_write", 00:05:14.549 "zoned": false, 00:05:14.549 "supported_io_types": { 00:05:14.549 "read": true, 00:05:14.549 "write": true, 00:05:14.549 "unmap": true, 00:05:14.549 "flush": true, 00:05:14.549 "reset": true, 00:05:14.549 "nvme_admin": false, 
00:05:14.549 "nvme_io": false, 00:05:14.549 "nvme_io_md": false, 00:05:14.549 "write_zeroes": true, 00:05:14.549 "zcopy": true, 00:05:14.549 "get_zone_info": false, 00:05:14.549 "zone_management": false, 00:05:14.549 "zone_append": false, 00:05:14.549 "compare": false, 00:05:14.549 "compare_and_write": false, 00:05:14.549 "abort": true, 00:05:14.549 "seek_hole": false, 00:05:14.549 "seek_data": false, 00:05:14.549 "copy": true, 00:05:14.549 "nvme_iov_md": false 00:05:14.549 }, 00:05:14.549 "memory_domains": [ 00:05:14.549 { 00:05:14.549 "dma_device_id": "system", 00:05:14.549 "dma_device_type": 1 00:05:14.549 }, 00:05:14.549 { 00:05:14.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.549 "dma_device_type": 2 00:05:14.549 } 00:05:14.549 ], 00:05:14.549 "driver_specific": {} 00:05:14.549 }, 00:05:14.549 { 00:05:14.549 "name": "Passthru0", 00:05:14.549 "aliases": [ 00:05:14.549 "26f1975b-099d-5850-b08b-55e56462212f" 00:05:14.549 ], 00:05:14.549 "product_name": "passthru", 00:05:14.549 "block_size": 512, 00:05:14.549 "num_blocks": 16384, 00:05:14.549 "uuid": "26f1975b-099d-5850-b08b-55e56462212f", 00:05:14.549 "assigned_rate_limits": { 00:05:14.549 "rw_ios_per_sec": 0, 00:05:14.549 "rw_mbytes_per_sec": 0, 00:05:14.549 "r_mbytes_per_sec": 0, 00:05:14.549 "w_mbytes_per_sec": 0 00:05:14.550 }, 00:05:14.550 "claimed": false, 00:05:14.550 "zoned": false, 00:05:14.550 "supported_io_types": { 00:05:14.550 "read": true, 00:05:14.550 "write": true, 00:05:14.550 "unmap": true, 00:05:14.550 "flush": true, 00:05:14.550 "reset": true, 00:05:14.550 "nvme_admin": false, 00:05:14.550 "nvme_io": false, 00:05:14.550 "nvme_io_md": false, 00:05:14.550 "write_zeroes": true, 00:05:14.550 "zcopy": true, 00:05:14.550 "get_zone_info": false, 00:05:14.550 "zone_management": false, 00:05:14.550 "zone_append": false, 00:05:14.550 "compare": false, 00:05:14.550 "compare_and_write": false, 00:05:14.550 "abort": true, 00:05:14.550 "seek_hole": false, 00:05:14.550 "seek_data": false, 00:05:14.550 "copy": true, 00:05:14.550 "nvme_iov_md": false 00:05:14.550 }, 00:05:14.550 "memory_domains": [ 00:05:14.550 { 00:05:14.550 "dma_device_id": "system", 00:05:14.550 "dma_device_type": 1 00:05:14.550 }, 00:05:14.550 { 00:05:14.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.550 "dma_device_type": 2 00:05:14.550 } 00:05:14.550 ], 00:05:14.550 "driver_specific": { 00:05:14.550 "passthru": { 00:05:14.550 "name": "Passthru0", 00:05:14.550 "base_bdev_name": "Malloc0" 00:05:14.550 } 00:05:14.550 } 00:05:14.550 } 00:05:14.550 ]' 00:05:14.550 01:25:22 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:14.550 01:25:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:14.550 01:25:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:14.550 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.550 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.810 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.810 01:25:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:14.810 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.810 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.810 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.810 01:25:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:14.810 01:25:23 rpc.rpc_integrity -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.810 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.810 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.810 01:25:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:14.810 01:25:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:14.810 ************************************ 00:05:14.810 END TEST rpc_integrity 00:05:14.810 ************************************ 00:05:14.810 01:25:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:14.810 00:05:14.810 real 0m0.355s 00:05:14.810 user 0m0.220s 00:05:14.810 sys 0m0.041s 00:05:14.810 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.810 01:25:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:14.810 01:25:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:14.810 01:25:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.810 01:25:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.810 01:25:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:14.810 ************************************ 00:05:14.810 START TEST rpc_plugins 00:05:14.810 ************************************ 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:14.810 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.810 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:14.810 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.810 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:14.810 { 00:05:14.810 "name": "Malloc1", 00:05:14.810 "aliases": [ 00:05:14.810 "bc77ab73-abae-4ec3-a00a-9ffafc757572" 00:05:14.810 ], 00:05:14.810 "product_name": "Malloc disk", 00:05:14.810 "block_size": 4096, 00:05:14.810 "num_blocks": 256, 00:05:14.810 "uuid": "bc77ab73-abae-4ec3-a00a-9ffafc757572", 00:05:14.810 "assigned_rate_limits": { 00:05:14.810 "rw_ios_per_sec": 0, 00:05:14.810 "rw_mbytes_per_sec": 0, 00:05:14.810 "r_mbytes_per_sec": 0, 00:05:14.810 "w_mbytes_per_sec": 0 00:05:14.810 }, 00:05:14.810 "claimed": false, 00:05:14.810 "zoned": false, 00:05:14.810 "supported_io_types": { 00:05:14.810 "read": true, 00:05:14.810 "write": true, 00:05:14.810 "unmap": true, 00:05:14.810 "flush": true, 00:05:14.810 "reset": true, 00:05:14.810 "nvme_admin": false, 00:05:14.810 "nvme_io": false, 00:05:14.810 "nvme_io_md": false, 00:05:14.810 "write_zeroes": true, 00:05:14.810 "zcopy": true, 00:05:14.810 "get_zone_info": false, 00:05:14.810 "zone_management": false, 00:05:14.810 "zone_append": false, 00:05:14.810 "compare": false, 00:05:14.810 "compare_and_write": false, 00:05:14.810 "abort": true, 00:05:14.810 "seek_hole": false, 00:05:14.810 "seek_data": false, 00:05:14.810 "copy": true, 00:05:14.810 "nvme_iov_md": false 00:05:14.810 }, 00:05:14.810 "memory_domains": [ 00:05:14.810 { 
00:05:14.810 "dma_device_id": "system", 00:05:14.810 "dma_device_type": 1 00:05:14.810 }, 00:05:14.810 { 00:05:14.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:14.810 "dma_device_type": 2 00:05:14.810 } 00:05:14.810 ], 00:05:14.810 "driver_specific": {} 00:05:14.810 } 00:05:14.810 ]' 00:05:14.810 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:14.810 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:14.810 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:14.810 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:14.810 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.070 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.070 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:15.070 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:15.070 ************************************ 00:05:15.070 END TEST rpc_plugins 00:05:15.070 ************************************ 00:05:15.070 01:25:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:15.070 00:05:15.070 real 0m0.170s 00:05:15.070 user 0m0.111s 00:05:15.070 sys 0m0.019s 00:05:15.070 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.070 01:25:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:15.070 01:25:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:15.070 01:25:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.070 01:25:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.070 01:25:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.070 ************************************ 00:05:15.070 START TEST rpc_trace_cmd_test 00:05:15.070 ************************************ 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:15.070 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid57413", 00:05:15.070 "tpoint_group_mask": "0x8", 00:05:15.070 "iscsi_conn": { 00:05:15.070 "mask": "0x2", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "scsi": { 00:05:15.070 "mask": "0x4", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "bdev": { 00:05:15.070 "mask": "0x8", 00:05:15.070 "tpoint_mask": "0xffffffffffffffff" 00:05:15.070 }, 00:05:15.070 "nvmf_rdma": { 00:05:15.070 "mask": "0x10", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "nvmf_tcp": { 00:05:15.070 "mask": "0x20", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "ftl": { 00:05:15.070 
"mask": "0x40", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "blobfs": { 00:05:15.070 "mask": "0x80", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "dsa": { 00:05:15.070 "mask": "0x200", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "thread": { 00:05:15.070 "mask": "0x400", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "nvme_pcie": { 00:05:15.070 "mask": "0x800", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "iaa": { 00:05:15.070 "mask": "0x1000", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "nvme_tcp": { 00:05:15.070 "mask": "0x2000", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "bdev_nvme": { 00:05:15.070 "mask": "0x4000", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "sock": { 00:05:15.070 "mask": "0x8000", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "blob": { 00:05:15.070 "mask": "0x10000", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "bdev_raid": { 00:05:15.070 "mask": "0x20000", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 }, 00:05:15.070 "scheduler": { 00:05:15.070 "mask": "0x40000", 00:05:15.070 "tpoint_mask": "0x0" 00:05:15.070 } 00:05:15.070 }' 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:15.070 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:15.330 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:15.330 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:15.330 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:15.330 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:15.330 ************************************ 00:05:15.330 END TEST rpc_trace_cmd_test 00:05:15.330 ************************************ 00:05:15.330 01:25:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:15.330 00:05:15.330 real 0m0.290s 00:05:15.330 user 0m0.254s 00:05:15.330 sys 0m0.024s 00:05:15.330 01:25:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.330 01:25:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:15.330 01:25:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:15.330 01:25:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:15.330 01:25:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:15.330 01:25:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:15.330 01:25:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:15.330 01:25:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.330 ************************************ 00:05:15.330 START TEST rpc_daemon_integrity 00:05:15.330 ************************************ 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.330 
01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.330 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:15.590 { 00:05:15.590 "name": "Malloc2", 00:05:15.590 "aliases": [ 00:05:15.590 "1325cb05-530e-45b9-bb31-e5ab48a5a48c" 00:05:15.590 ], 00:05:15.590 "product_name": "Malloc disk", 00:05:15.590 "block_size": 512, 00:05:15.590 "num_blocks": 16384, 00:05:15.590 "uuid": "1325cb05-530e-45b9-bb31-e5ab48a5a48c", 00:05:15.590 "assigned_rate_limits": { 00:05:15.590 "rw_ios_per_sec": 0, 00:05:15.590 "rw_mbytes_per_sec": 0, 00:05:15.590 "r_mbytes_per_sec": 0, 00:05:15.590 "w_mbytes_per_sec": 0 00:05:15.590 }, 00:05:15.590 "claimed": false, 00:05:15.590 "zoned": false, 00:05:15.590 "supported_io_types": { 00:05:15.590 "read": true, 00:05:15.590 "write": true, 00:05:15.590 "unmap": true, 00:05:15.590 "flush": true, 00:05:15.590 "reset": true, 00:05:15.590 "nvme_admin": false, 00:05:15.590 "nvme_io": false, 00:05:15.590 "nvme_io_md": false, 00:05:15.590 "write_zeroes": true, 00:05:15.590 "zcopy": true, 00:05:15.590 "get_zone_info": false, 00:05:15.590 "zone_management": false, 00:05:15.590 "zone_append": false, 00:05:15.590 "compare": false, 00:05:15.590 "compare_and_write": false, 00:05:15.590 "abort": true, 00:05:15.590 "seek_hole": false, 00:05:15.590 "seek_data": false, 00:05:15.590 "copy": true, 00:05:15.590 "nvme_iov_md": false 00:05:15.590 }, 00:05:15.590 "memory_domains": [ 00:05:15.590 { 00:05:15.590 "dma_device_id": "system", 00:05:15.590 "dma_device_type": 1 00:05:15.590 }, 00:05:15.590 { 00:05:15.590 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.590 "dma_device_type": 2 00:05:15.590 } 00:05:15.590 ], 00:05:15.590 "driver_specific": {} 00:05:15.590 } 00:05:15.590 ]' 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.590 [2024-11-17 01:25:23.882853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:15.590 [2024-11-17 01:25:23.882929] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:05:15.590 [2024-11-17 01:25:23.882977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008d80 00:05:15.590 [2024-11-17 01:25:23.882994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:15.590 [2024-11-17 01:25:23.885785] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:15.590 [2024-11-17 01:25:23.886025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:15.590 Passthru0 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.590 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:15.590 { 00:05:15.590 "name": "Malloc2", 00:05:15.590 "aliases": [ 00:05:15.590 "1325cb05-530e-45b9-bb31-e5ab48a5a48c" 00:05:15.590 ], 00:05:15.590 "product_name": "Malloc disk", 00:05:15.590 "block_size": 512, 00:05:15.590 "num_blocks": 16384, 00:05:15.590 "uuid": "1325cb05-530e-45b9-bb31-e5ab48a5a48c", 00:05:15.590 "assigned_rate_limits": { 00:05:15.590 "rw_ios_per_sec": 0, 00:05:15.590 "rw_mbytes_per_sec": 0, 00:05:15.590 "r_mbytes_per_sec": 0, 00:05:15.590 "w_mbytes_per_sec": 0 00:05:15.590 }, 00:05:15.590 "claimed": true, 00:05:15.590 "claim_type": "exclusive_write", 00:05:15.590 "zoned": false, 00:05:15.590 "supported_io_types": { 00:05:15.590 "read": true, 00:05:15.590 "write": true, 00:05:15.590 "unmap": true, 00:05:15.590 "flush": true, 00:05:15.590 "reset": true, 00:05:15.590 "nvme_admin": false, 00:05:15.590 "nvme_io": false, 00:05:15.590 "nvme_io_md": false, 00:05:15.590 "write_zeroes": true, 00:05:15.590 "zcopy": true, 00:05:15.590 "get_zone_info": false, 00:05:15.590 "zone_management": false, 00:05:15.590 "zone_append": false, 00:05:15.590 "compare": false, 00:05:15.590 "compare_and_write": false, 00:05:15.590 "abort": true, 00:05:15.590 "seek_hole": false, 00:05:15.590 "seek_data": false, 00:05:15.590 "copy": true, 00:05:15.590 "nvme_iov_md": false 00:05:15.590 }, 00:05:15.590 "memory_domains": [ 00:05:15.590 { 00:05:15.590 "dma_device_id": "system", 00:05:15.591 "dma_device_type": 1 00:05:15.591 }, 00:05:15.591 { 00:05:15.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.591 "dma_device_type": 2 00:05:15.591 } 00:05:15.591 ], 00:05:15.591 "driver_specific": {} 00:05:15.591 }, 00:05:15.591 { 00:05:15.591 "name": "Passthru0", 00:05:15.591 "aliases": [ 00:05:15.591 "68bdd143-771d-5ebc-89fa-f6effe2198d4" 00:05:15.591 ], 00:05:15.591 "product_name": "passthru", 00:05:15.591 "block_size": 512, 00:05:15.591 "num_blocks": 16384, 00:05:15.591 "uuid": "68bdd143-771d-5ebc-89fa-f6effe2198d4", 00:05:15.591 "assigned_rate_limits": { 00:05:15.591 "rw_ios_per_sec": 0, 00:05:15.591 "rw_mbytes_per_sec": 0, 00:05:15.591 "r_mbytes_per_sec": 0, 00:05:15.591 "w_mbytes_per_sec": 0 00:05:15.591 }, 00:05:15.591 "claimed": false, 00:05:15.591 "zoned": false, 00:05:15.591 "supported_io_types": { 00:05:15.591 "read": true, 00:05:15.591 "write": true, 00:05:15.591 "unmap": true, 00:05:15.591 "flush": true, 00:05:15.591 "reset": true, 00:05:15.591 "nvme_admin": false, 00:05:15.591 "nvme_io": false, 00:05:15.591 
"nvme_io_md": false, 00:05:15.591 "write_zeroes": true, 00:05:15.591 "zcopy": true, 00:05:15.591 "get_zone_info": false, 00:05:15.591 "zone_management": false, 00:05:15.591 "zone_append": false, 00:05:15.591 "compare": false, 00:05:15.591 "compare_and_write": false, 00:05:15.591 "abort": true, 00:05:15.591 "seek_hole": false, 00:05:15.591 "seek_data": false, 00:05:15.591 "copy": true, 00:05:15.591 "nvme_iov_md": false 00:05:15.591 }, 00:05:15.591 "memory_domains": [ 00:05:15.591 { 00:05:15.591 "dma_device_id": "system", 00:05:15.591 "dma_device_type": 1 00:05:15.591 }, 00:05:15.591 { 00:05:15.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:15.591 "dma_device_type": 2 00:05:15.591 } 00:05:15.591 ], 00:05:15.591 "driver_specific": { 00:05:15.591 "passthru": { 00:05:15.591 "name": "Passthru0", 00:05:15.591 "base_bdev_name": "Malloc2" 00:05:15.591 } 00:05:15.591 } 00:05:15.591 } 00:05:15.591 ]' 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.591 01:25:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.591 01:25:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.591 01:25:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:15.591 01:25:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:15.591 01:25:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.591 01:25:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:15.591 01:25:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:15.591 01:25:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:15.851 ************************************ 00:05:15.851 END TEST rpc_daemon_integrity 00:05:15.851 ************************************ 00:05:15.851 01:25:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:15.851 00:05:15.851 real 0m0.350s 00:05:15.851 user 0m0.225s 00:05:15.851 sys 0m0.038s 00:05:15.851 01:25:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:15.851 01:25:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:15.851 01:25:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:15.851 01:25:24 rpc -- rpc/rpc.sh@84 -- # killprocess 57413 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@954 -- # '[' -z 57413 ']' 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@958 -- # kill -0 57413 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@959 -- # uname 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57413 00:05:15.851 killing process with pid 57413 00:05:15.851 01:25:24 rpc -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57413' 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@973 -- # kill 57413 00:05:15.851 01:25:24 rpc -- common/autotest_common.sh@978 -- # wait 57413 00:05:17.758 00:05:17.758 real 0m4.403s 00:05:17.758 user 0m5.273s 00:05:17.758 sys 0m0.736s 00:05:17.758 01:25:25 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:17.758 01:25:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.758 ************************************ 00:05:17.758 END TEST rpc 00:05:17.758 ************************************ 00:05:17.758 01:25:25 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:17.758 01:25:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.758 01:25:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.758 01:25:25 -- common/autotest_common.sh@10 -- # set +x 00:05:17.758 ************************************ 00:05:17.758 START TEST skip_rpc 00:05:17.758 ************************************ 00:05:17.758 01:25:25 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:17.758 * Looking for test storage... 00:05:17.758 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:17.758 01:25:25 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:17.758 01:25:25 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:17.758 01:25:25 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:17.758 01:25:26 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:17.758 01:25:26 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:17.758 01:25:26 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:17.758 01:25:26 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:17.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.758 --rc genhtml_branch_coverage=1 00:05:17.758 --rc genhtml_function_coverage=1 00:05:17.758 --rc genhtml_legend=1 00:05:17.758 --rc geninfo_all_blocks=1 00:05:17.758 --rc geninfo_unexecuted_blocks=1 00:05:17.758 00:05:17.758 ' 00:05:17.758 01:25:26 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:17.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.758 --rc genhtml_branch_coverage=1 00:05:17.758 --rc genhtml_function_coverage=1 00:05:17.758 --rc genhtml_legend=1 00:05:17.758 --rc geninfo_all_blocks=1 00:05:17.758 --rc geninfo_unexecuted_blocks=1 00:05:17.758 00:05:17.758 ' 00:05:17.758 01:25:26 skip_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:17.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.758 --rc genhtml_branch_coverage=1 00:05:17.758 --rc genhtml_function_coverage=1 00:05:17.758 --rc genhtml_legend=1 00:05:17.758 --rc geninfo_all_blocks=1 00:05:17.758 --rc geninfo_unexecuted_blocks=1 00:05:17.758 00:05:17.758 ' 00:05:17.758 01:25:26 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:17.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:17.758 --rc genhtml_branch_coverage=1 00:05:17.758 --rc genhtml_function_coverage=1 00:05:17.758 --rc genhtml_legend=1 00:05:17.759 --rc geninfo_all_blocks=1 00:05:17.759 --rc geninfo_unexecuted_blocks=1 00:05:17.759 00:05:17.759 ' 00:05:17.759 01:25:26 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:17.759 01:25:26 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:17.759 01:25:26 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:17.759 01:25:26 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:17.759 01:25:26 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:17.759 01:25:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.759 ************************************ 00:05:17.759 START TEST skip_rpc 00:05:17.759 ************************************ 00:05:17.759 01:25:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:17.759 01:25:26 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@16 -- # local spdk_pid=57636 00:05:17.759 01:25:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:17.759 01:25:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:17.759 01:25:26 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:18.018 [2024-11-17 01:25:26.278693] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:18.018 [2024-11-17 01:25:26.279170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57636 ] 00:05:18.018 [2024-11-17 01:25:26.457660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:18.277 [2024-11-17 01:25:26.546960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.536 [2024-11-17 01:25:26.737812] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57636 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 57636 ']' 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 57636 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57636 00:05:22.727 killing process with pid 57636 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 57636' 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 57636 00:05:22.727 01:25:31 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 57636 00:05:24.633 00:05:24.633 real 0m6.775s 00:05:24.633 user 0m6.340s 00:05:24.633 sys 0m0.333s 00:05:24.633 01:25:32 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.633 ************************************ 00:05:24.633 END TEST skip_rpc 00:05:24.633 ************************************ 00:05:24.633 01:25:32 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.633 01:25:32 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:24.633 01:25:32 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.633 01:25:32 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.633 01:25:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.633 ************************************ 00:05:24.633 START TEST skip_rpc_with_json 00:05:24.633 ************************************ 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:24.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57735 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57735 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57735 ']' 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.633 01:25:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:24.634 [2024-11-17 01:25:33.062219] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:24.634 [2024-11-17 01:25:33.062609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57735 ] 00:05:24.892 [2024-11-17 01:25:33.237672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.892 [2024-11-17 01:25:33.317737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.151 [2024-11-17 01:25:33.501574] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.719 [2024-11-17 01:25:33.981927] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:25.719 request: 00:05:25.719 { 00:05:25.719 "trtype": "tcp", 00:05:25.719 "method": "nvmf_get_transports", 00:05:25.719 "req_id": 1 00:05:25.719 } 00:05:25.719 Got JSON-RPC error response 00:05:25.719 response: 00:05:25.719 { 00:05:25.719 "code": -19, 00:05:25.719 "message": "No such device" 00:05:25.719 } 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.719 [2024-11-17 01:25:33.994060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:25.719 01:25:33 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:25.979 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:25.979 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:25.979 { 00:05:25.979 "subsystems": [ 00:05:25.979 { 00:05:25.979 "subsystem": "fsdev", 00:05:25.979 "config": [ 00:05:25.979 { 00:05:25.979 "method": "fsdev_set_opts", 00:05:25.979 "params": { 00:05:25.979 "fsdev_io_pool_size": 65535, 00:05:25.979 "fsdev_io_cache_size": 256 00:05:25.979 } 00:05:25.979 } 00:05:25.979 ] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "vfio_user_target", 00:05:25.979 "config": null 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "keyring", 00:05:25.979 "config": [] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "iobuf", 00:05:25.979 "config": [ 00:05:25.979 { 00:05:25.979 "method": "iobuf_set_options", 00:05:25.979 "params": { 00:05:25.979 "small_pool_count": 8192, 00:05:25.979 "large_pool_count": 1024, 00:05:25.979 
"small_bufsize": 8192, 00:05:25.979 "large_bufsize": 135168, 00:05:25.979 "enable_numa": false 00:05:25.979 } 00:05:25.979 } 00:05:25.979 ] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "sock", 00:05:25.979 "config": [ 00:05:25.979 { 00:05:25.979 "method": "sock_set_default_impl", 00:05:25.979 "params": { 00:05:25.979 "impl_name": "uring" 00:05:25.979 } 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "method": "sock_impl_set_options", 00:05:25.979 "params": { 00:05:25.979 "impl_name": "ssl", 00:05:25.979 "recv_buf_size": 4096, 00:05:25.979 "send_buf_size": 4096, 00:05:25.979 "enable_recv_pipe": true, 00:05:25.979 "enable_quickack": false, 00:05:25.979 "enable_placement_id": 0, 00:05:25.979 "enable_zerocopy_send_server": true, 00:05:25.979 "enable_zerocopy_send_client": false, 00:05:25.979 "zerocopy_threshold": 0, 00:05:25.979 "tls_version": 0, 00:05:25.979 "enable_ktls": false 00:05:25.979 } 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "method": "sock_impl_set_options", 00:05:25.979 "params": { 00:05:25.979 "impl_name": "posix", 00:05:25.979 "recv_buf_size": 2097152, 00:05:25.979 "send_buf_size": 2097152, 00:05:25.979 "enable_recv_pipe": true, 00:05:25.979 "enable_quickack": false, 00:05:25.979 "enable_placement_id": 0, 00:05:25.979 "enable_zerocopy_send_server": true, 00:05:25.979 "enable_zerocopy_send_client": false, 00:05:25.979 "zerocopy_threshold": 0, 00:05:25.979 "tls_version": 0, 00:05:25.979 "enable_ktls": false 00:05:25.979 } 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "method": "sock_impl_set_options", 00:05:25.979 "params": { 00:05:25.979 "impl_name": "uring", 00:05:25.979 "recv_buf_size": 2097152, 00:05:25.979 "send_buf_size": 2097152, 00:05:25.979 "enable_recv_pipe": true, 00:05:25.979 "enable_quickack": false, 00:05:25.979 "enable_placement_id": 0, 00:05:25.979 "enable_zerocopy_send_server": false, 00:05:25.979 "enable_zerocopy_send_client": false, 00:05:25.979 "zerocopy_threshold": 0, 00:05:25.979 "tls_version": 0, 00:05:25.979 "enable_ktls": false 00:05:25.979 } 00:05:25.979 } 00:05:25.979 ] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "vmd", 00:05:25.979 "config": [] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "accel", 00:05:25.979 "config": [ 00:05:25.979 { 00:05:25.979 "method": "accel_set_options", 00:05:25.979 "params": { 00:05:25.979 "small_cache_size": 128, 00:05:25.979 "large_cache_size": 16, 00:05:25.979 "task_count": 2048, 00:05:25.979 "sequence_count": 2048, 00:05:25.979 "buf_count": 2048 00:05:25.979 } 00:05:25.979 } 00:05:25.979 ] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "bdev", 00:05:25.979 "config": [ 00:05:25.979 { 00:05:25.979 "method": "bdev_set_options", 00:05:25.979 "params": { 00:05:25.979 "bdev_io_pool_size": 65535, 00:05:25.979 "bdev_io_cache_size": 256, 00:05:25.979 "bdev_auto_examine": true, 00:05:25.979 "iobuf_small_cache_size": 128, 00:05:25.979 "iobuf_large_cache_size": 16 00:05:25.979 } 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "method": "bdev_raid_set_options", 00:05:25.979 "params": { 00:05:25.979 "process_window_size_kb": 1024, 00:05:25.979 "process_max_bandwidth_mb_sec": 0 00:05:25.979 } 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "method": "bdev_iscsi_set_options", 00:05:25.979 "params": { 00:05:25.979 "timeout_sec": 30 00:05:25.979 } 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "method": "bdev_nvme_set_options", 00:05:25.979 "params": { 00:05:25.979 "action_on_timeout": "none", 00:05:25.979 "timeout_us": 0, 00:05:25.979 "timeout_admin_us": 0, 00:05:25.979 "keep_alive_timeout_ms": 10000, 
00:05:25.979 "arbitration_burst": 0, 00:05:25.979 "low_priority_weight": 0, 00:05:25.979 "medium_priority_weight": 0, 00:05:25.979 "high_priority_weight": 0, 00:05:25.979 "nvme_adminq_poll_period_us": 10000, 00:05:25.979 "nvme_ioq_poll_period_us": 0, 00:05:25.979 "io_queue_requests": 0, 00:05:25.979 "delay_cmd_submit": true, 00:05:25.979 "transport_retry_count": 4, 00:05:25.979 "bdev_retry_count": 3, 00:05:25.979 "transport_ack_timeout": 0, 00:05:25.979 "ctrlr_loss_timeout_sec": 0, 00:05:25.979 "reconnect_delay_sec": 0, 00:05:25.979 "fast_io_fail_timeout_sec": 0, 00:05:25.979 "disable_auto_failback": false, 00:05:25.979 "generate_uuids": false, 00:05:25.979 "transport_tos": 0, 00:05:25.979 "nvme_error_stat": false, 00:05:25.979 "rdma_srq_size": 0, 00:05:25.979 "io_path_stat": false, 00:05:25.979 "allow_accel_sequence": false, 00:05:25.979 "rdma_max_cq_size": 0, 00:05:25.979 "rdma_cm_event_timeout_ms": 0, 00:05:25.979 "dhchap_digests": [ 00:05:25.979 "sha256", 00:05:25.979 "sha384", 00:05:25.979 "sha512" 00:05:25.979 ], 00:05:25.979 "dhchap_dhgroups": [ 00:05:25.979 "null", 00:05:25.979 "ffdhe2048", 00:05:25.979 "ffdhe3072", 00:05:25.979 "ffdhe4096", 00:05:25.979 "ffdhe6144", 00:05:25.979 "ffdhe8192" 00:05:25.979 ] 00:05:25.979 } 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "method": "bdev_nvme_set_hotplug", 00:05:25.979 "params": { 00:05:25.979 "period_us": 100000, 00:05:25.979 "enable": false 00:05:25.979 } 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "method": "bdev_wait_for_examine" 00:05:25.979 } 00:05:25.979 ] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "scsi", 00:05:25.979 "config": null 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "scheduler", 00:05:25.979 "config": [ 00:05:25.979 { 00:05:25.979 "method": "framework_set_scheduler", 00:05:25.979 "params": { 00:05:25.979 "name": "static" 00:05:25.979 } 00:05:25.979 } 00:05:25.979 ] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "vhost_scsi", 00:05:25.979 "config": [] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "vhost_blk", 00:05:25.979 "config": [] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "ublk", 00:05:25.979 "config": [] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "nbd", 00:05:25.979 "config": [] 00:05:25.979 }, 00:05:25.979 { 00:05:25.979 "subsystem": "nvmf", 00:05:25.979 "config": [ 00:05:25.979 { 00:05:25.979 "method": "nvmf_set_config", 00:05:25.979 "params": { 00:05:25.979 "discovery_filter": "match_any", 00:05:25.979 "admin_cmd_passthru": { 00:05:25.979 "identify_ctrlr": false 00:05:25.979 }, 00:05:25.979 "dhchap_digests": [ 00:05:25.979 "sha256", 00:05:25.979 "sha384", 00:05:25.979 "sha512" 00:05:25.979 ], 00:05:25.979 "dhchap_dhgroups": [ 00:05:25.980 "null", 00:05:25.980 "ffdhe2048", 00:05:25.980 "ffdhe3072", 00:05:25.980 "ffdhe4096", 00:05:25.980 "ffdhe6144", 00:05:25.980 "ffdhe8192" 00:05:25.980 ] 00:05:25.980 } 00:05:25.980 }, 00:05:25.980 { 00:05:25.980 "method": "nvmf_set_max_subsystems", 00:05:25.980 "params": { 00:05:25.980 "max_subsystems": 1024 00:05:25.980 } 00:05:25.980 }, 00:05:25.980 { 00:05:25.980 "method": "nvmf_set_crdt", 00:05:25.980 "params": { 00:05:25.980 "crdt1": 0, 00:05:25.980 "crdt2": 0, 00:05:25.980 "crdt3": 0 00:05:25.980 } 00:05:25.980 }, 00:05:25.980 { 00:05:25.980 "method": "nvmf_create_transport", 00:05:25.980 "params": { 00:05:25.980 "trtype": "TCP", 00:05:25.980 "max_queue_depth": 128, 00:05:25.980 "max_io_qpairs_per_ctrlr": 127, 00:05:25.980 "in_capsule_data_size": 4096, 00:05:25.980 "max_io_size": 131072, 00:05:25.980 
"io_unit_size": 131072, 00:05:25.980 "max_aq_depth": 128, 00:05:25.980 "num_shared_buffers": 511, 00:05:25.980 "buf_cache_size": 4294967295, 00:05:25.980 "dif_insert_or_strip": false, 00:05:25.980 "zcopy": false, 00:05:25.980 "c2h_success": true, 00:05:25.980 "sock_priority": 0, 00:05:25.980 "abort_timeout_sec": 1, 00:05:25.980 "ack_timeout": 0, 00:05:25.980 "data_wr_pool_size": 0 00:05:25.980 } 00:05:25.980 } 00:05:25.980 ] 00:05:25.980 }, 00:05:25.980 { 00:05:25.980 "subsystem": "iscsi", 00:05:25.980 "config": [ 00:05:25.980 { 00:05:25.980 "method": "iscsi_set_options", 00:05:25.980 "params": { 00:05:25.980 "node_base": "iqn.2016-06.io.spdk", 00:05:25.980 "max_sessions": 128, 00:05:25.980 "max_connections_per_session": 2, 00:05:25.980 "max_queue_depth": 64, 00:05:25.980 "default_time2wait": 2, 00:05:25.980 "default_time2retain": 20, 00:05:25.980 "first_burst_length": 8192, 00:05:25.980 "immediate_data": true, 00:05:25.980 "allow_duplicated_isid": false, 00:05:25.980 "error_recovery_level": 0, 00:05:25.980 "nop_timeout": 60, 00:05:25.980 "nop_in_interval": 30, 00:05:25.980 "disable_chap": false, 00:05:25.980 "require_chap": false, 00:05:25.980 "mutual_chap": false, 00:05:25.980 "chap_group": 0, 00:05:25.980 "max_large_datain_per_connection": 64, 00:05:25.980 "max_r2t_per_connection": 4, 00:05:25.980 "pdu_pool_size": 36864, 00:05:25.980 "immediate_data_pool_size": 16384, 00:05:25.980 "data_out_pool_size": 2048 00:05:25.980 } 00:05:25.980 } 00:05:25.980 ] 00:05:25.980 } 00:05:25.980 ] 00:05:25.980 } 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57735 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57735 ']' 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57735 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57735 00:05:25.980 killing process with pid 57735 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57735' 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57735 00:05:25.980 01:25:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57735 00:05:27.887 01:25:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57780 00:05:27.887 01:25:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.887 01:25:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57780 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57780 ']' 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57780 00:05:33.162 
01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57780 00:05:33.162 killing process with pid 57780 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57780' 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57780 00:05:33.162 01:25:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57780 00:05:34.542 01:25:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:34.542 01:25:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:34.542 00:05:34.542 real 0m9.889s 00:05:34.542 user 0m9.556s 00:05:34.542 sys 0m0.731s 00:05:34.542 ************************************ 00:05:34.542 END TEST skip_rpc_with_json 00:05:34.542 ************************************ 00:05:34.542 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.542 01:25:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.542 01:25:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:34.542 01:25:42 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.542 01:25:42 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.542 01:25:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.543 ************************************ 00:05:34.543 START TEST skip_rpc_with_delay 00:05:34.543 ************************************ 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- 
common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:34.543 01:25:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:34.543 [2024-11-17 01:25:42.979021] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:05:34.802 01:25:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:34.802 01:25:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:34.802 01:25:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:34.802 01:25:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:34.802 00:05:34.802 real 0m0.171s 00:05:34.802 user 0m0.102s 00:05:34.802 sys 0m0.067s 00:05:34.802 01:25:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.802 ************************************ 00:05:34.802 END TEST skip_rpc_with_delay 00:05:34.802 ************************************ 00:05:34.802 01:25:43 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:34.802 01:25:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:34.802 01:25:43 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:34.802 01:25:43 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:34.802 01:25:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.802 01:25:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.802 01:25:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.802 ************************************ 00:05:34.802 START TEST exit_on_failed_rpc_init 00:05:34.802 ************************************ 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57908 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57908 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57908 ']' 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.802 01:25:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.802 [2024-11-17 01:25:43.225534] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
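The skip_rpc_with_delay case traced a little earlier relies on spdk_tgt rejecting the flag combination --no-rpc-server --wait-for-rpc ("Cannot use '--wait-for-rpc' if no RPC server is going to be started."); the test only asserts that the launch fails. A rough equivalent outside the harness, reusing the same binary path, would be:

  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
      echo "unexpected: target started despite the conflicting flags" >&2
      exit 1
  fi
  echo "target rejected --wait-for-rpc without an RPC server, as expected"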
00:05:34.802 [2024-11-17 01:25:43.225695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57908 ] 00:05:35.062 [2024-11-17 01:25:43.405362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.062 [2024-11-17 01:25:43.493156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.321 [2024-11-17 01:25:43.688659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:35.889 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:36.149 [2024-11-17 01:25:44.357132] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:36.149 [2024-11-17 01:25:44.357286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57926 ] 00:05:36.149 [2024-11-17 01:25:44.528613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.408 [2024-11-17 01:25:44.625238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:36.408 [2024-11-17 01:25:44.625384] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:36.408 [2024-11-17 01:25:44.625405] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:36.408 [2024-11-17 01:25:44.625421] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57908 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57908 ']' 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57908 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.711 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57908 00:05:36.712 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.712 killing process with pid 57908 00:05:36.712 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.712 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57908' 00:05:36.712 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57908 00:05:36.712 01:25:44 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57908 00:05:38.662 00:05:38.662 real 0m3.543s 00:05:38.662 user 0m4.103s 00:05:38.662 sys 0m0.499s 00:05:38.662 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.662 01:25:46 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:38.662 ************************************ 00:05:38.662 END TEST exit_on_failed_rpc_init 00:05:38.662 ************************************ 00:05:38.662 01:25:46 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:38.662 00:05:38.662 real 0m20.781s 00:05:38.662 user 0m20.297s 00:05:38.662 sys 0m1.821s 00:05:38.662 01:25:46 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.662 01:25:46 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.662 ************************************ 00:05:38.662 END TEST skip_rpc 00:05:38.662 ************************************ 00:05:38.662 01:25:46 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:38.662 01:25:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.662 01:25:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.662 01:25:46 -- common/autotest_common.sh@10 -- # set +x 00:05:38.662 
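The exit_on_failed_rpc_init case that just completed starts a second spdk_tgt while the first still owns /var/tmp/spdk.sock, then checks that the second instance fails RPC initialization and exits non-zero (the harness normalizes the raw exit status, 234 here, before comparing). A stripped-down sketch of the same collision, with the binary path taken from the trace, is:

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
  "$spdk_tgt" -m 0x1 &          # first instance takes the default /var/tmp/spdk.sock
  first=$!
  sleep 5                       # the harness uses waitforlisten instead of a fixed delay
  if "$spdk_tgt" -m 0x2; then   # second instance cannot bind the same RPC socket
      echo "unexpected: second target initialized its RPC server" >&2
      kill "$first"; exit 1
  fi
  kill "$first"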
************************************ 00:05:38.662 START TEST rpc_client 00:05:38.662 ************************************ 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:38.662 * Looking for test storage... 00:05:38.662 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.662 01:25:46 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.662 --rc genhtml_branch_coverage=1 00:05:38.662 --rc genhtml_function_coverage=1 00:05:38.662 --rc genhtml_legend=1 00:05:38.662 --rc geninfo_all_blocks=1 00:05:38.662 --rc geninfo_unexecuted_blocks=1 00:05:38.662 00:05:38.662 ' 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.662 --rc genhtml_branch_coverage=1 00:05:38.662 --rc genhtml_function_coverage=1 00:05:38.662 --rc genhtml_legend=1 00:05:38.662 --rc geninfo_all_blocks=1 00:05:38.662 --rc geninfo_unexecuted_blocks=1 00:05:38.662 00:05:38.662 ' 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.662 --rc genhtml_branch_coverage=1 00:05:38.662 --rc genhtml_function_coverage=1 00:05:38.662 --rc genhtml_legend=1 00:05:38.662 --rc geninfo_all_blocks=1 00:05:38.662 --rc geninfo_unexecuted_blocks=1 00:05:38.662 00:05:38.662 ' 00:05:38.662 01:25:46 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.662 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.662 --rc genhtml_branch_coverage=1 00:05:38.662 --rc genhtml_function_coverage=1 00:05:38.662 --rc genhtml_legend=1 00:05:38.662 --rc geninfo_all_blocks=1 00:05:38.662 --rc geninfo_unexecuted_blocks=1 00:05:38.662 00:05:38.662 ' 00:05:38.662 01:25:46 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:38.662 OK 00:05:38.662 01:25:47 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:38.662 00:05:38.662 real 0m0.263s 00:05:38.662 user 0m0.153s 00:05:38.662 sys 0m0.121s 00:05:38.662 01:25:47 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:38.662 01:25:47 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:38.662 ************************************ 00:05:38.662 END TEST rpc_client 00:05:38.662 ************************************ 00:05:38.662 01:25:47 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:38.662 01:25:47 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.662 01:25:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.662 01:25:47 -- common/autotest_common.sh@10 -- # set +x 00:05:38.662 ************************************ 00:05:38.662 START TEST json_config 00:05:38.662 ************************************ 00:05:38.662 01:25:47 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.923 01:25:47 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.923 01:25:47 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.923 01:25:47 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.923 01:25:47 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.923 01:25:47 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.923 01:25:47 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.923 01:25:47 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.923 01:25:47 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.923 01:25:47 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.923 01:25:47 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.923 01:25:47 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.923 01:25:47 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:38.923 01:25:47 json_config -- scripts/common.sh@345 -- # : 1 00:05:38.923 01:25:47 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.923 01:25:47 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:38.923 01:25:47 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:38.923 01:25:47 json_config -- scripts/common.sh@353 -- # local d=1 00:05:38.923 01:25:47 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.923 01:25:47 json_config -- scripts/common.sh@355 -- # echo 1 00:05:38.923 01:25:47 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.923 01:25:47 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:38.923 01:25:47 json_config -- scripts/common.sh@353 -- # local d=2 00:05:38.923 01:25:47 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.923 01:25:47 json_config -- scripts/common.sh@355 -- # echo 2 00:05:38.923 01:25:47 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.923 01:25:47 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.923 01:25:47 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.923 01:25:47 json_config -- scripts/common.sh@368 -- # return 0 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.923 --rc genhtml_branch_coverage=1 00:05:38.923 --rc genhtml_function_coverage=1 00:05:38.923 --rc genhtml_legend=1 00:05:38.923 --rc geninfo_all_blocks=1 00:05:38.923 --rc geninfo_unexecuted_blocks=1 00:05:38.923 00:05:38.923 ' 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.923 --rc genhtml_branch_coverage=1 00:05:38.923 --rc genhtml_function_coverage=1 00:05:38.923 --rc genhtml_legend=1 00:05:38.923 --rc geninfo_all_blocks=1 00:05:38.923 --rc geninfo_unexecuted_blocks=1 00:05:38.923 00:05:38.923 ' 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.923 --rc genhtml_branch_coverage=1 00:05:38.923 --rc genhtml_function_coverage=1 00:05:38.923 --rc genhtml_legend=1 00:05:38.923 --rc geninfo_all_blocks=1 00:05:38.923 --rc geninfo_unexecuted_blocks=1 00:05:38.923 00:05:38.923 ' 00:05:38.923 01:25:47 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.923 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.923 --rc genhtml_branch_coverage=1 00:05:38.923 --rc genhtml_function_coverage=1 00:05:38.923 --rc genhtml_legend=1 00:05:38.923 --rc geninfo_all_blocks=1 00:05:38.923 --rc geninfo_unexecuted_blocks=1 00:05:38.923 00:05:38.923 ' 00:05:38.923 01:25:47 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.923 01:25:47 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.923 01:25:47 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:38.923 01:25:47 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.923 01:25:47 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.923 01:25:47 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.923 01:25:47 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.923 01:25:47 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.923 01:25:47 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.924 01:25:47 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.924 01:25:47 json_config -- paths/export.sh@5 -- # export PATH 00:05:38.924 01:25:47 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@51 -- # : 0 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:38.924 01:25:47 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:38.924 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:38.924 01:25:47 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:38.924 INFO: JSON configuration test init 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.924 Waiting for target to run... 00:05:38.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
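The json_config test that is starting launches spdk_tgt with a dedicated RPC socket and --wait-for-rpc, so all configuration can be pushed over JSON-RPC before framework initialization completes. A rough sketch of driving that by hand, using the same launch arguments as the trace (framework_start_init and save_config are standard SPDK RPCs, not commands taken from this log), is:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk_tgt.sock
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
  sleep 5                                   # the harness waits on the socket (waitforlisten)
  "$rpc" -s "$sock" framework_start_init    # finish startup once pre-init configuration is done
  "$rpc" -s "$sock" save_config > /tmp/tgt_config.json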
00:05:38.924 01:25:47 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:38.924 01:25:47 json_config -- json_config/common.sh@9 -- # local app=target 00:05:38.924 01:25:47 json_config -- json_config/common.sh@10 -- # shift 00:05:38.924 01:25:47 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:38.924 01:25:47 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:38.924 01:25:47 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:38.924 01:25:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.924 01:25:47 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:38.924 01:25:47 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58085 00:05:38.924 01:25:47 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:38.924 01:25:47 json_config -- json_config/common.sh@25 -- # waitforlisten 58085 /var/tmp/spdk_tgt.sock 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@835 -- # '[' -z 58085 ']' 00:05:38.924 01:25:47 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.924 01:25:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:39.184 [2024-11-17 01:25:47.398661] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:39.184 [2024-11-17 01:25:47.399126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58085 ] 00:05:39.443 [2024-11-17 01:25:47.722934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.443 [2024-11-17 01:25:47.849757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.012 01:25:48 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.012 01:25:48 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:40.012 01:25:48 json_config -- json_config/common.sh@26 -- # echo '' 00:05:40.012 00:05:40.012 01:25:48 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:40.012 01:25:48 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:40.012 01:25:48 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:40.012 01:25:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.012 01:25:48 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:40.012 01:25:48 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:40.012 01:25:48 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.012 01:25:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.012 01:25:48 json_config -- json_config/json_config.sh@280 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:40.012 01:25:48 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:40.012 01:25:48 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:40.580 [2024-11-17 01:25:48.918528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:41.158 01:25:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.158 01:25:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:41.158 01:25:49 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:41.158 01:25:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@53 
-- # local type_diff 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@54 -- # sort 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:41.418 01:25:49 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:41.418 01:25:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:41.418 01:25:49 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:41.418 01:25:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:41.418 01:25:49 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.418 01:25:49 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:41.677 MallocForNvmf0 00:05:41.677 01:25:50 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.678 01:25:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:41.937 MallocForNvmf1 00:05:41.937 01:25:50 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:41.937 01:25:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:42.505 [2024-11-17 01:25:50.672762] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.505 01:25:50 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.505 01:25:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:42.505 01:25:50 json_config -- json_config/json_config.sh@254 -- # tgt_rpc 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.505 01:25:50 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:42.764 01:25:51 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:42.764 01:25:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:43.023 01:25:51 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.023 01:25:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:43.283 [2024-11-17 01:25:51.601551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:43.283 01:25:51 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:43.283 01:25:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.283 01:25:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.283 01:25:51 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:43.283 01:25:51 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.283 01:25:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.283 01:25:51 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:43.283 01:25:51 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.283 01:25:51 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:43.543 MallocBdevForConfigChangeCheck 00:05:43.802 01:25:52 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:43.802 01:25:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.802 01:25:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:43.802 01:25:52 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:43.802 01:25:52 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.061 INFO: shutting down applications... 00:05:44.061 01:25:52 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 
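The nvmf configuration built in the steps above can be reproduced by hand with the same RPCs the test issued against /var/tmp/spdk_tgt.sock; every command below appears verbatim in the trace:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  $rpc bdev_malloc_create 8 512 --name MallocForNvmf0
  $rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
  $rpc nvmf_create_transport -t tcp -u 8192 -c 0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420

The listener ends up on 127.0.0.1:4420, which matches the "NVMe/TCP Target Listening" notice logged afterwards.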
00:05:44.062 01:25:52 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:44.062 01:25:52 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:44.062 01:25:52 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:44.062 01:25:52 json_config -- json_config/json_config.sh@340 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:44.630 Calling clear_iscsi_subsystem 00:05:44.630 Calling clear_nvmf_subsystem 00:05:44.630 Calling clear_nbd_subsystem 00:05:44.630 Calling clear_ublk_subsystem 00:05:44.630 Calling clear_vhost_blk_subsystem 00:05:44.630 Calling clear_vhost_scsi_subsystem 00:05:44.630 Calling clear_bdev_subsystem 00:05:44.630 01:25:52 json_config -- json_config/json_config.sh@344 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:05:44.630 01:25:52 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:44.630 01:25:52 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:44.630 01:25:52 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:44.630 01:25:52 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:05:44.630 01:25:52 json_config -- json_config/json_config.sh@352 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:44.889 01:25:53 json_config -- json_config/json_config.sh@352 -- # break 00:05:44.889 01:25:53 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:44.889 01:25:53 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:05:44.889 01:25:53 json_config -- json_config/common.sh@31 -- # local app=target 00:05:44.889 01:25:53 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:44.889 01:25:53 json_config -- json_config/common.sh@35 -- # [[ -n 58085 ]] 00:05:44.889 01:25:53 json_config -- json_config/common.sh@38 -- # kill -SIGINT 58085 00:05:44.889 01:25:53 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:44.889 01:25:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:44.889 01:25:53 json_config -- json_config/common.sh@41 -- # kill -0 58085 00:05:44.889 01:25:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:45.457 01:25:53 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:45.457 01:25:53 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:45.457 01:25:53 json_config -- json_config/common.sh@41 -- # kill -0 58085 00:05:45.457 01:25:53 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:46.024 01:25:54 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:46.024 01:25:54 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:46.024 01:25:54 json_config -- json_config/common.sh@41 -- # kill -0 58085 00:05:46.024 01:25:54 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:46.024 01:25:54 json_config -- json_config/common.sh@43 -- # break 00:05:46.024 01:25:54 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:46.024 SPDK target shutdown done 00:05:46.024 INFO: relaunching applications... 
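The shutdown traced above is a SIGINT followed by a bounded poll; stripped of the xtrace noise, json_config_test_shutdown_app does roughly this, with the same pid and 30 x 0.5 s budget as in this run:

    # Condensed sketch of the shutdown sequence traced above.
    app_pid=58085                        # spdk_tgt pid in this run
    kill -SIGINT "$app_pid"              # ask the target to exit cleanly
    for ((i = 0; i < 30; i++)); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break                        # the process is gone
        fi
        sleep 0.5                        # up to ~15 s before giving up
    done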
00:05:46.024 01:25:54 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:46.024 01:25:54 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:46.024 01:25:54 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:46.024 01:25:54 json_config -- json_config/common.sh@9 -- # local app=target 00:05:46.024 01:25:54 json_config -- json_config/common.sh@10 -- # shift 00:05:46.024 01:25:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:46.024 01:25:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:46.024 01:25:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:46.024 01:25:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.024 01:25:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:46.024 01:25:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=58299 00:05:46.024 01:25:54 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:46.024 Waiting for target to run... 00:05:46.024 01:25:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:46.024 01:25:54 json_config -- json_config/common.sh@25 -- # waitforlisten 58299 /var/tmp/spdk_tgt.sock 00:05:46.024 01:25:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 58299 ']' 00:05:46.024 01:25:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:46.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:46.024 01:25:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:46.024 01:25:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:46.024 01:25:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:46.024 01:25:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:46.024 [2024-11-17 01:25:54.425204] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:46.024 [2024-11-17 01:25:54.425686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58299 ] 00:05:46.593 [2024-11-17 01:25:54.765510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.593 [2024-11-17 01:25:54.852550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.852 [2024-11-17 01:25:55.152125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:47.420 [2024-11-17 01:25:55.718567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:47.420 [2024-11-17 01:25:55.750735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:47.420 00:05:47.420 INFO: Checking if target configuration is the same... 
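The relaunch and the check that follows are straightforward: feed the saved JSON back to a fresh spdk_tgt, wait for its RPC socket to answer, then dump the live config and compare it, after sorting, against the file it was started from. A condensed sketch using the paths and flags from this run; waitforlisten and json_diff.sh do the same things with temp files and more error handling:

    # Condensed sketch of the relaunch-and-compare step traced above and below.
    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk_tgt.sock
    CONFIG=$SPDK/spdk_tgt_config.json

    "$SPDK/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$SOCK" --json "$CONFIG" &

    until "$SPDK/scripts/rpc.py" -s "$SOCK" -t 1 spdk_get_version >/dev/null 2>&1; do
        sleep 0.5                        # waitforlisten polls the socket much like this
    done

    # config_filter.py -method sort normalizes both sides so ordering differences do not matter.
    "$SPDK/scripts/rpc.py" -s "$SOCK" save_config \
        | "$SPDK/test/json_config/config_filter.py" -method sort > /tmp/live.json
    "$SPDK/test/json_config/config_filter.py" -method sort < "$CONFIG" > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'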
00:05:47.420 01:25:55 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:47.420 01:25:55 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:47.420 01:25:55 json_config -- json_config/common.sh@26 -- # echo '' 00:05:47.420 01:25:55 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:47.420 01:25:55 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:47.420 01:25:55 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:47.420 01:25:55 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:47.420 01:25:55 json_config -- json_config/json_config.sh@385 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:47.420 + '[' 2 -ne 2 ']' 00:05:47.420 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:47.420 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:05:47.420 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:47.420 +++ basename /dev/fd/62 00:05:47.420 ++ mktemp /tmp/62.XXX 00:05:47.420 + tmp_file_1=/tmp/62.qY7 00:05:47.420 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:47.420 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:47.420 + tmp_file_2=/tmp/spdk_tgt_config.json.vyo 00:05:47.420 + ret=0 00:05:47.420 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:47.988 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:47.989 + diff -u /tmp/62.qY7 /tmp/spdk_tgt_config.json.vyo 00:05:47.989 INFO: JSON config files are the same 00:05:47.989 + echo 'INFO: JSON config files are the same' 00:05:47.989 + rm /tmp/62.qY7 /tmp/spdk_tgt_config.json.vyo 00:05:47.989 + exit 0 00:05:47.989 INFO: changing configuration and checking if this can be detected... 00:05:47.989 01:25:56 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:47.989 01:25:56 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:47.989 01:25:56 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:47.989 01:25:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:48.247 01:25:56 json_config -- json_config/json_config.sh@394 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.248 01:25:56 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:48.248 01:25:56 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:48.248 + '[' 2 -ne 2 ']' 00:05:48.248 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:05:48.248 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:05:48.248 + rootdir=/home/vagrant/spdk_repo/spdk 00:05:48.248 +++ basename /dev/fd/62 00:05:48.248 ++ mktemp /tmp/62.XXX 00:05:48.248 + tmp_file_1=/tmp/62.A2C 00:05:48.248 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:48.248 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:48.248 + tmp_file_2=/tmp/spdk_tgt_config.json.AG8 00:05:48.248 + ret=0 00:05:48.248 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:48.816 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:05:48.816 + diff -u /tmp/62.A2C /tmp/spdk_tgt_config.json.AG8 00:05:48.816 + ret=1 00:05:48.816 + echo '=== Start of file: /tmp/62.A2C ===' 00:05:48.816 + cat /tmp/62.A2C 00:05:48.816 + echo '=== End of file: /tmp/62.A2C ===' 00:05:48.816 + echo '' 00:05:48.816 + echo '=== Start of file: /tmp/spdk_tgt_config.json.AG8 ===' 00:05:48.816 + cat /tmp/spdk_tgt_config.json.AG8 00:05:48.816 + echo '=== End of file: /tmp/spdk_tgt_config.json.AG8 ===' 00:05:48.816 + echo '' 00:05:48.816 + rm /tmp/62.A2C /tmp/spdk_tgt_config.json.AG8 00:05:48.816 + exit 1 00:05:48.816 INFO: configuration change detected. 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@324 -- # [[ -n 58299 ]] 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.816 01:25:57 json_config -- json_config/json_config.sh@330 -- # killprocess 58299 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@954 -- # '[' -z 58299 ']' 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@958 -- # kill -0 58299 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@959 -- # uname 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58299 00:05:48.816 
killing process with pid 58299 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58299' 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@973 -- # kill 58299 00:05:48.816 01:25:57 json_config -- common/autotest_common.sh@978 -- # wait 58299 00:05:49.815 01:25:58 json_config -- json_config/json_config.sh@333 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:05:49.815 01:25:58 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:49.815 01:25:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:49.815 01:25:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.815 INFO: Success 00:05:49.815 01:25:58 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:49.815 01:25:58 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:49.815 00:05:49.815 real 0m10.985s 00:05:49.815 user 0m14.923s 00:05:49.815 sys 0m1.755s 00:05:49.815 ************************************ 00:05:49.815 END TEST json_config 00:05:49.815 ************************************ 00:05:49.815 01:25:58 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.815 01:25:58 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.815 01:25:58 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:49.815 01:25:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.815 01:25:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.815 01:25:58 -- common/autotest_common.sh@10 -- # set +x 00:05:49.815 ************************************ 00:05:49.815 START TEST json_config_extra_key 00:05:49.815 ************************************ 00:05:49.815 01:25:58 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:49.815 01:25:58 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.815 01:25:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.815 01:25:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.815 01:25:58 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.815 01:25:58 json_config_extra_key -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.815 01:25:58 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:50.090 01:25:58 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:50.090 01:25:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:50.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.090 --rc genhtml_branch_coverage=1 00:05:50.090 --rc genhtml_function_coverage=1 00:05:50.090 --rc genhtml_legend=1 00:05:50.090 --rc geninfo_all_blocks=1 00:05:50.090 --rc geninfo_unexecuted_blocks=1 00:05:50.090 00:05:50.090 ' 00:05:50.090 01:25:58 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:50.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.090 --rc genhtml_branch_coverage=1 00:05:50.090 --rc genhtml_function_coverage=1 00:05:50.090 --rc genhtml_legend=1 00:05:50.090 --rc geninfo_all_blocks=1 00:05:50.090 --rc geninfo_unexecuted_blocks=1 00:05:50.090 00:05:50.090 ' 00:05:50.090 01:25:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:50.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.090 --rc genhtml_branch_coverage=1 00:05:50.090 --rc genhtml_function_coverage=1 00:05:50.090 --rc genhtml_legend=1 00:05:50.090 --rc geninfo_all_blocks=1 00:05:50.090 --rc geninfo_unexecuted_blocks=1 00:05:50.090 00:05:50.090 ' 00:05:50.090 01:25:58 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:50.090 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:50.090 --rc genhtml_branch_coverage=1 00:05:50.090 --rc genhtml_function_coverage=1 00:05:50.090 --rc genhtml_legend=1 00:05:50.090 --rc geninfo_all_blocks=1 00:05:50.090 --rc geninfo_unexecuted_blocks=1 00:05:50.090 00:05:50.090 ' 00:05:50.090 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@7 -- # 
uname -s 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.090 01:25:58 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.090 01:25:58 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.090 01:25:58 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.090 01:25:58 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.091 01:25:58 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.091 01:25:58 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:50.091 01:25:58 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:50.091 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:50.091 01:25:58 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:50.091 INFO: launching applications... 00:05:50.091 Waiting for target to run... 00:05:50.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
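The "[: : integer expression expected" complaint a few records above comes from nvmf/common.sh line 33 comparing an unset variable against 1 with -eq; it is harmless for this run. The failure mode and the usual guard look like this (an illustrative snippet, not a change made in the repo; the real variable name is not shown in the trace):

    # Reproducing and guarding the "[: : integer expression expected" warning seen above.
    flag=""                              # stands in for the unset variable tested at line 33
    [ "$flag" -eq 1 ]                    # bash prints: [: : integer expression expected (status 2)
    [ "${flag:-0}" -eq 1 ]               # defaulting to 0 keeps the comparison numeric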
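The json_config_extra_key run that starts below drives the target through the same json_config/common.sh helpers, keyed by app name in a handful of associative arrays. The bookkeeping traced in the next records amounts to this sketch, using the values from this run:

    # Sketch of the per-app bookkeeping declared below (values from this run).
    declare -A app_pid=(      ['target']='' )
    declare -A app_socket=(   ['target']='/var/tmp/spdk_tgt.sock' )
    declare -A app_params=(   ['target']='-m 0x1 -s 1024' )
    declare -A configs_path=( ['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json' )

    app=target
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
        -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
    app_pid[$app]=$!                     # later polled with kill -0 and stopped with kill -SIGINT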
00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:50.091 01:25:58 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=58465 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 58465 /var/tmp/spdk_tgt.sock 00:05:50.091 01:25:58 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 58465 ']' 00:05:50.091 01:25:58 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.091 01:25:58 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:50.091 01:25:58 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:50.091 01:25:58 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.091 01:25:58 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:50.091 01:25:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.091 [2024-11-17 01:25:58.424775] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:05:50.091 [2024-11-17 01:25:58.425218] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58465 ] 00:05:50.350 [2024-11-17 01:25:58.772648] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.609 [2024-11-17 01:25:58.848980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.609 [2024-11-17 01:25:59.057291] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:51.178 01:25:59 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:51.178 01:25:59 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:51.178 00:05:51.178 01:25:59 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:51.178 INFO: shutting down applications... 00:05:51.178 01:25:59 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 58465 ]] 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 58465 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58465 00:05:51.178 01:25:59 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:51.746 01:26:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:51.747 01:26:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:51.747 01:26:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58465 00:05:51.747 01:26:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.314 01:26:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.314 01:26:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.314 01:26:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58465 00:05:52.314 01:26:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:52.574 01:26:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:52.574 01:26:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:52.574 01:26:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58465 00:05:52.574 01:26:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.141 01:26:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.141 01:26:01 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.141 01:26:01 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58465 00:05:53.141 01:26:01 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:53.709 01:26:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:53.709 01:26:02 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:53.709 01:26:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 58465 00:05:53.709 01:26:02 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:53.709 01:26:02 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:53.709 01:26:02 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:53.709 SPDK target shutdown done 00:05:53.709 Success 00:05:53.709 01:26:02 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:53.710 01:26:02 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:53.710 00:05:53.710 real 0m3.933s 00:05:53.710 user 0m3.604s 00:05:53.710 sys 0m0.509s 00:05:53.710 01:26:02 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.710 ************************************ 00:05:53.710 END TEST json_config_extra_key 00:05:53.710 ************************************ 00:05:53.710 01:26:02 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:53.710 01:26:02 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.710 01:26:02 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.710 01:26:02 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.710 01:26:02 -- common/autotest_common.sh@10 -- # set +x 00:05:53.710 ************************************ 00:05:53.710 START TEST alias_rpc 00:05:53.710 ************************************ 00:05:53.710 01:26:02 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:53.710 * Looking for test storage... 00:05:53.710 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:53.710 01:26:02 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.970 01:26:02 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.970 --rc genhtml_branch_coverage=1 00:05:53.970 --rc genhtml_function_coverage=1 00:05:53.970 --rc genhtml_legend=1 00:05:53.970 --rc geninfo_all_blocks=1 00:05:53.970 --rc geninfo_unexecuted_blocks=1 00:05:53.970 00:05:53.970 ' 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.970 --rc genhtml_branch_coverage=1 00:05:53.970 --rc genhtml_function_coverage=1 00:05:53.970 --rc genhtml_legend=1 00:05:53.970 --rc geninfo_all_blocks=1 00:05:53.970 --rc geninfo_unexecuted_blocks=1 00:05:53.970 00:05:53.970 ' 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.970 --rc genhtml_branch_coverage=1 00:05:53.970 --rc genhtml_function_coverage=1 00:05:53.970 --rc genhtml_legend=1 00:05:53.970 --rc geninfo_all_blocks=1 00:05:53.970 --rc geninfo_unexecuted_blocks=1 00:05:53.970 00:05:53.970 ' 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.970 --rc genhtml_branch_coverage=1 00:05:53.970 --rc genhtml_function_coverage=1 00:05:53.970 --rc genhtml_legend=1 00:05:53.970 --rc geninfo_all_blocks=1 00:05:53.970 --rc geninfo_unexecuted_blocks=1 00:05:53.970 00:05:53.970 ' 00:05:53.970 01:26:02 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:53.970 01:26:02 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=58560 00:05:53.970 01:26:02 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:53.970 01:26:02 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 58560 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 58560 ']' 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:05:53.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.970 01:26:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.970 [2024-11-17 01:26:02.393261] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:05:53.970 [2024-11-17 01:26:02.393588] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58560 ] 00:05:54.229 [2024-11-17 01:26:02.566035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:54.229 [2024-11-17 01:26:02.660108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.489 [2024-11-17 01:26:02.854083] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:55.058 01:26:03 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:55.058 01:26:03 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:55.058 01:26:03 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:55.318 01:26:03 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 58560 00:05:55.318 01:26:03 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 58560 ']' 00:05:55.318 01:26:03 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 58560 00:05:55.318 01:26:03 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:55.318 01:26:03 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.318 01:26:03 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58560 00:05:55.577 killing process with pid 58560 00:05:55.577 01:26:03 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.577 01:26:03 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.577 01:26:03 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58560' 00:05:55.577 01:26:03 alias_rpc -- common/autotest_common.sh@973 -- # kill 58560 00:05:55.577 01:26:03 alias_rpc -- common/autotest_common.sh@978 -- # wait 58560 00:05:57.483 ************************************ 00:05:57.483 END TEST alias_rpc 00:05:57.483 ************************************ 00:05:57.483 00:05:57.483 real 0m3.416s 00:05:57.483 user 0m3.762s 00:05:57.483 sys 0m0.482s 00:05:57.483 01:26:05 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:57.483 01:26:05 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.483 01:26:05 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:57.483 01:26:05 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:57.483 01:26:05 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:57.483 01:26:05 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:57.483 01:26:05 -- common/autotest_common.sh@10 -- # set +x 00:05:57.483 ************************************ 00:05:57.483 START TEST spdkcli_tcp 00:05:57.483 ************************************ 00:05:57.483 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:57.483 * Looking for test storage... 
00:05:57.483 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:57.483 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:57.483 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:57.483 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:57.483 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:57.483 01:26:05 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:57.484 01:26:05 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.484 --rc genhtml_branch_coverage=1 00:05:57.484 --rc genhtml_function_coverage=1 00:05:57.484 --rc genhtml_legend=1 00:05:57.484 --rc geninfo_all_blocks=1 00:05:57.484 --rc geninfo_unexecuted_blocks=1 00:05:57.484 00:05:57.484 ' 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.484 --rc genhtml_branch_coverage=1 00:05:57.484 --rc genhtml_function_coverage=1 00:05:57.484 --rc genhtml_legend=1 00:05:57.484 --rc geninfo_all_blocks=1 00:05:57.484 --rc geninfo_unexecuted_blocks=1 00:05:57.484 
00:05:57.484 ' 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.484 --rc genhtml_branch_coverage=1 00:05:57.484 --rc genhtml_function_coverage=1 00:05:57.484 --rc genhtml_legend=1 00:05:57.484 --rc geninfo_all_blocks=1 00:05:57.484 --rc geninfo_unexecuted_blocks=1 00:05:57.484 00:05:57.484 ' 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:57.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:57.484 --rc genhtml_branch_coverage=1 00:05:57.484 --rc genhtml_function_coverage=1 00:05:57.484 --rc genhtml_legend=1 00:05:57.484 --rc geninfo_all_blocks=1 00:05:57.484 --rc geninfo_unexecuted_blocks=1 00:05:57.484 00:05:57.484 ' 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=58666 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:57.484 01:26:05 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 58666 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 58666 ']' 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:57.484 01:26:05 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:57.484 [2024-11-17 01:26:05.845899] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
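spdkcli_tcp exercises the RPC server over TCP rather than the UNIX socket: as traced below, socat forwards TCP port 9998 on 127.0.0.1 to /var/tmp/spdk.sock, and rpc.py is then pointed at that address. In essence (a sketch with the values from this run):

    # Sketch of the TCP bridge set up below: socat forwards 127.0.0.1:9998 to the target's UNIX RPC socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!                         # 58683 in this run

    # -r retries, -t per-call timeout; -s/-p point rpc.py at the TCP address instead of a UNIX socket path.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"                    # tear the bridge down when finished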
00:05:57.484 [2024-11-17 01:26:05.846263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58666 ] 00:05:57.743 [2024-11-17 01:26:06.013425] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:57.743 [2024-11-17 01:26:06.102350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.743 [2024-11-17 01:26:06.102368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.002 [2024-11-17 01:26:06.303266] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:05:58.570 01:26:06 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:58.570 01:26:06 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:58.570 01:26:06 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=58683 00:05:58.570 01:26:06 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:58.570 01:26:06 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:58.830 [ 00:05:58.830 "bdev_malloc_delete", 00:05:58.830 "bdev_malloc_create", 00:05:58.830 "bdev_null_resize", 00:05:58.830 "bdev_null_delete", 00:05:58.830 "bdev_null_create", 00:05:58.830 "bdev_nvme_cuse_unregister", 00:05:58.830 "bdev_nvme_cuse_register", 00:05:58.830 "bdev_opal_new_user", 00:05:58.830 "bdev_opal_set_lock_state", 00:05:58.830 "bdev_opal_delete", 00:05:58.830 "bdev_opal_get_info", 00:05:58.830 "bdev_opal_create", 00:05:58.830 "bdev_nvme_opal_revert", 00:05:58.830 "bdev_nvme_opal_init", 00:05:58.830 "bdev_nvme_send_cmd", 00:05:58.830 "bdev_nvme_set_keys", 00:05:58.830 "bdev_nvme_get_path_iostat", 00:05:58.830 "bdev_nvme_get_mdns_discovery_info", 00:05:58.830 "bdev_nvme_stop_mdns_discovery", 00:05:58.830 "bdev_nvme_start_mdns_discovery", 00:05:58.830 "bdev_nvme_set_multipath_policy", 00:05:58.830 "bdev_nvme_set_preferred_path", 00:05:58.830 "bdev_nvme_get_io_paths", 00:05:58.830 "bdev_nvme_remove_error_injection", 00:05:58.830 "bdev_nvme_add_error_injection", 00:05:58.830 "bdev_nvme_get_discovery_info", 00:05:58.830 "bdev_nvme_stop_discovery", 00:05:58.830 "bdev_nvme_start_discovery", 00:05:58.830 "bdev_nvme_get_controller_health_info", 00:05:58.830 "bdev_nvme_disable_controller", 00:05:58.830 "bdev_nvme_enable_controller", 00:05:58.830 "bdev_nvme_reset_controller", 00:05:58.830 "bdev_nvme_get_transport_statistics", 00:05:58.830 "bdev_nvme_apply_firmware", 00:05:58.830 "bdev_nvme_detach_controller", 00:05:58.830 "bdev_nvme_get_controllers", 00:05:58.830 "bdev_nvme_attach_controller", 00:05:58.830 "bdev_nvme_set_hotplug", 00:05:58.830 "bdev_nvme_set_options", 00:05:58.830 "bdev_passthru_delete", 00:05:58.830 "bdev_passthru_create", 00:05:58.830 "bdev_lvol_set_parent_bdev", 00:05:58.830 "bdev_lvol_set_parent", 00:05:58.830 "bdev_lvol_check_shallow_copy", 00:05:58.830 "bdev_lvol_start_shallow_copy", 00:05:58.830 "bdev_lvol_grow_lvstore", 00:05:58.830 "bdev_lvol_get_lvols", 00:05:58.830 "bdev_lvol_get_lvstores", 00:05:58.830 "bdev_lvol_delete", 00:05:58.830 "bdev_lvol_set_read_only", 00:05:58.830 "bdev_lvol_resize", 00:05:58.830 "bdev_lvol_decouple_parent", 00:05:58.830 "bdev_lvol_inflate", 00:05:58.830 "bdev_lvol_rename", 00:05:58.830 "bdev_lvol_clone_bdev", 00:05:58.830 "bdev_lvol_clone", 00:05:58.830 "bdev_lvol_snapshot", 
00:05:58.830 "bdev_lvol_create", 00:05:58.830 "bdev_lvol_delete_lvstore", 00:05:58.830 "bdev_lvol_rename_lvstore", 00:05:58.830 "bdev_lvol_create_lvstore", 00:05:58.830 "bdev_raid_set_options", 00:05:58.830 "bdev_raid_remove_base_bdev", 00:05:58.830 "bdev_raid_add_base_bdev", 00:05:58.830 "bdev_raid_delete", 00:05:58.830 "bdev_raid_create", 00:05:58.830 "bdev_raid_get_bdevs", 00:05:58.830 "bdev_error_inject_error", 00:05:58.830 "bdev_error_delete", 00:05:58.830 "bdev_error_create", 00:05:58.830 "bdev_split_delete", 00:05:58.830 "bdev_split_create", 00:05:58.830 "bdev_delay_delete", 00:05:58.830 "bdev_delay_create", 00:05:58.830 "bdev_delay_update_latency", 00:05:58.830 "bdev_zone_block_delete", 00:05:58.830 "bdev_zone_block_create", 00:05:58.830 "blobfs_create", 00:05:58.830 "blobfs_detect", 00:05:58.830 "blobfs_set_cache_size", 00:05:58.830 "bdev_aio_delete", 00:05:58.830 "bdev_aio_rescan", 00:05:58.830 "bdev_aio_create", 00:05:58.830 "bdev_ftl_set_property", 00:05:58.830 "bdev_ftl_get_properties", 00:05:58.830 "bdev_ftl_get_stats", 00:05:58.830 "bdev_ftl_unmap", 00:05:58.830 "bdev_ftl_unload", 00:05:58.830 "bdev_ftl_delete", 00:05:58.830 "bdev_ftl_load", 00:05:58.830 "bdev_ftl_create", 00:05:58.830 "bdev_virtio_attach_controller", 00:05:58.830 "bdev_virtio_scsi_get_devices", 00:05:58.830 "bdev_virtio_detach_controller", 00:05:58.830 "bdev_virtio_blk_set_hotplug", 00:05:58.830 "bdev_iscsi_delete", 00:05:58.830 "bdev_iscsi_create", 00:05:58.830 "bdev_iscsi_set_options", 00:05:58.830 "bdev_uring_delete", 00:05:58.830 "bdev_uring_rescan", 00:05:58.830 "bdev_uring_create", 00:05:58.830 "accel_error_inject_error", 00:05:58.830 "ioat_scan_accel_module", 00:05:58.830 "dsa_scan_accel_module", 00:05:58.830 "iaa_scan_accel_module", 00:05:58.830 "vfu_virtio_create_fs_endpoint", 00:05:58.830 "vfu_virtio_create_scsi_endpoint", 00:05:58.830 "vfu_virtio_scsi_remove_target", 00:05:58.830 "vfu_virtio_scsi_add_target", 00:05:58.830 "vfu_virtio_create_blk_endpoint", 00:05:58.830 "vfu_virtio_delete_endpoint", 00:05:58.830 "keyring_file_remove_key", 00:05:58.830 "keyring_file_add_key", 00:05:58.830 "keyring_linux_set_options", 00:05:58.830 "fsdev_aio_delete", 00:05:58.830 "fsdev_aio_create", 00:05:58.830 "iscsi_get_histogram", 00:05:58.830 "iscsi_enable_histogram", 00:05:58.830 "iscsi_set_options", 00:05:58.830 "iscsi_get_auth_groups", 00:05:58.830 "iscsi_auth_group_remove_secret", 00:05:58.830 "iscsi_auth_group_add_secret", 00:05:58.830 "iscsi_delete_auth_group", 00:05:58.830 "iscsi_create_auth_group", 00:05:58.830 "iscsi_set_discovery_auth", 00:05:58.830 "iscsi_get_options", 00:05:58.830 "iscsi_target_node_request_logout", 00:05:58.830 "iscsi_target_node_set_redirect", 00:05:58.830 "iscsi_target_node_set_auth", 00:05:58.830 "iscsi_target_node_add_lun", 00:05:58.830 "iscsi_get_stats", 00:05:58.830 "iscsi_get_connections", 00:05:58.830 "iscsi_portal_group_set_auth", 00:05:58.830 "iscsi_start_portal_group", 00:05:58.830 "iscsi_delete_portal_group", 00:05:58.830 "iscsi_create_portal_group", 00:05:58.830 "iscsi_get_portal_groups", 00:05:58.830 "iscsi_delete_target_node", 00:05:58.830 "iscsi_target_node_remove_pg_ig_maps", 00:05:58.830 "iscsi_target_node_add_pg_ig_maps", 00:05:58.830 "iscsi_create_target_node", 00:05:58.830 "iscsi_get_target_nodes", 00:05:58.830 "iscsi_delete_initiator_group", 00:05:58.830 "iscsi_initiator_group_remove_initiators", 00:05:58.830 "iscsi_initiator_group_add_initiators", 00:05:58.830 "iscsi_create_initiator_group", 00:05:58.830 "iscsi_get_initiator_groups", 00:05:58.830 
"nvmf_set_crdt", 00:05:58.830 "nvmf_set_config", 00:05:58.830 "nvmf_set_max_subsystems", 00:05:58.830 "nvmf_stop_mdns_prr", 00:05:58.830 "nvmf_publish_mdns_prr", 00:05:58.830 "nvmf_subsystem_get_listeners", 00:05:58.830 "nvmf_subsystem_get_qpairs", 00:05:58.830 "nvmf_subsystem_get_controllers", 00:05:58.830 "nvmf_get_stats", 00:05:58.830 "nvmf_get_transports", 00:05:58.830 "nvmf_create_transport", 00:05:58.830 "nvmf_get_targets", 00:05:58.830 "nvmf_delete_target", 00:05:58.830 "nvmf_create_target", 00:05:58.830 "nvmf_subsystem_allow_any_host", 00:05:58.830 "nvmf_subsystem_set_keys", 00:05:58.831 "nvmf_subsystem_remove_host", 00:05:58.831 "nvmf_subsystem_add_host", 00:05:58.831 "nvmf_ns_remove_host", 00:05:58.831 "nvmf_ns_add_host", 00:05:58.831 "nvmf_subsystem_remove_ns", 00:05:58.831 "nvmf_subsystem_set_ns_ana_group", 00:05:58.831 "nvmf_subsystem_add_ns", 00:05:58.831 "nvmf_subsystem_listener_set_ana_state", 00:05:58.831 "nvmf_discovery_get_referrals", 00:05:58.831 "nvmf_discovery_remove_referral", 00:05:58.831 "nvmf_discovery_add_referral", 00:05:58.831 "nvmf_subsystem_remove_listener", 00:05:58.831 "nvmf_subsystem_add_listener", 00:05:58.831 "nvmf_delete_subsystem", 00:05:58.831 "nvmf_create_subsystem", 00:05:58.831 "nvmf_get_subsystems", 00:05:58.831 "env_dpdk_get_mem_stats", 00:05:58.831 "nbd_get_disks", 00:05:58.831 "nbd_stop_disk", 00:05:58.831 "nbd_start_disk", 00:05:58.831 "ublk_recover_disk", 00:05:58.831 "ublk_get_disks", 00:05:58.831 "ublk_stop_disk", 00:05:58.831 "ublk_start_disk", 00:05:58.831 "ublk_destroy_target", 00:05:58.831 "ublk_create_target", 00:05:58.831 "virtio_blk_create_transport", 00:05:58.831 "virtio_blk_get_transports", 00:05:58.831 "vhost_controller_set_coalescing", 00:05:58.831 "vhost_get_controllers", 00:05:58.831 "vhost_delete_controller", 00:05:58.831 "vhost_create_blk_controller", 00:05:58.831 "vhost_scsi_controller_remove_target", 00:05:58.831 "vhost_scsi_controller_add_target", 00:05:58.831 "vhost_start_scsi_controller", 00:05:58.831 "vhost_create_scsi_controller", 00:05:58.831 "thread_set_cpumask", 00:05:58.831 "scheduler_set_options", 00:05:58.831 "framework_get_governor", 00:05:58.831 "framework_get_scheduler", 00:05:58.831 "framework_set_scheduler", 00:05:58.831 "framework_get_reactors", 00:05:58.831 "thread_get_io_channels", 00:05:58.831 "thread_get_pollers", 00:05:58.831 "thread_get_stats", 00:05:58.831 "framework_monitor_context_switch", 00:05:58.831 "spdk_kill_instance", 00:05:58.831 "log_enable_timestamps", 00:05:58.831 "log_get_flags", 00:05:58.831 "log_clear_flag", 00:05:58.831 "log_set_flag", 00:05:58.831 "log_get_level", 00:05:58.831 "log_set_level", 00:05:58.831 "log_get_print_level", 00:05:58.831 "log_set_print_level", 00:05:58.831 "framework_enable_cpumask_locks", 00:05:58.831 "framework_disable_cpumask_locks", 00:05:58.831 "framework_wait_init", 00:05:58.831 "framework_start_init", 00:05:58.831 "scsi_get_devices", 00:05:58.831 "bdev_get_histogram", 00:05:58.831 "bdev_enable_histogram", 00:05:58.831 "bdev_set_qos_limit", 00:05:58.831 "bdev_set_qd_sampling_period", 00:05:58.831 "bdev_get_bdevs", 00:05:58.831 "bdev_reset_iostat", 00:05:58.831 "bdev_get_iostat", 00:05:58.831 "bdev_examine", 00:05:58.831 "bdev_wait_for_examine", 00:05:58.831 "bdev_set_options", 00:05:58.831 "accel_get_stats", 00:05:58.831 "accel_set_options", 00:05:58.831 "accel_set_driver", 00:05:58.831 "accel_crypto_key_destroy", 00:05:58.831 "accel_crypto_keys_get", 00:05:58.831 "accel_crypto_key_create", 00:05:58.831 "accel_assign_opc", 00:05:58.831 
"accel_get_module_info", 00:05:58.831 "accel_get_opc_assignments", 00:05:58.831 "vmd_rescan", 00:05:58.831 "vmd_remove_device", 00:05:58.831 "vmd_enable", 00:05:58.831 "sock_get_default_impl", 00:05:58.831 "sock_set_default_impl", 00:05:58.831 "sock_impl_set_options", 00:05:58.831 "sock_impl_get_options", 00:05:58.831 "iobuf_get_stats", 00:05:58.831 "iobuf_set_options", 00:05:58.831 "keyring_get_keys", 00:05:58.831 "vfu_tgt_set_base_path", 00:05:58.831 "framework_get_pci_devices", 00:05:58.831 "framework_get_config", 00:05:58.831 "framework_get_subsystems", 00:05:58.831 "fsdev_set_opts", 00:05:58.831 "fsdev_get_opts", 00:05:58.831 "trace_get_info", 00:05:58.831 "trace_get_tpoint_group_mask", 00:05:58.831 "trace_disable_tpoint_group", 00:05:58.831 "trace_enable_tpoint_group", 00:05:58.831 "trace_clear_tpoint_mask", 00:05:58.831 "trace_set_tpoint_mask", 00:05:58.831 "notify_get_notifications", 00:05:58.831 "notify_get_types", 00:05:58.831 "spdk_get_version", 00:05:58.831 "rpc_get_methods" 00:05:58.831 ] 00:05:58.831 01:26:07 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:58.831 01:26:07 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:58.831 01:26:07 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 58666 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 58666 ']' 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 58666 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58666 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58666' 00:05:58.831 killing process with pid 58666 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 58666 00:05:58.831 01:26:07 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 58666 00:06:00.739 ************************************ 00:06:00.739 END TEST spdkcli_tcp 00:06:00.739 ************************************ 00:06:00.739 00:06:00.739 real 0m3.386s 00:06:00.739 user 0m6.267s 00:06:00.739 sys 0m0.535s 00:06:00.739 01:26:08 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.739 01:26:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:00.739 01:26:08 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.739 01:26:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.739 01:26:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.739 01:26:08 -- common/autotest_common.sh@10 -- # set +x 00:06:00.739 ************************************ 00:06:00.739 START TEST dpdk_mem_utility 00:06:00.739 ************************************ 00:06:00.739 01:26:08 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:00.739 * Looking for test storage... 
00:06:00.739 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:00.739 01:26:09 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:00.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.739 --rc genhtml_branch_coverage=1 00:06:00.739 --rc genhtml_function_coverage=1 00:06:00.739 --rc genhtml_legend=1 00:06:00.739 --rc geninfo_all_blocks=1 00:06:00.739 --rc geninfo_unexecuted_blocks=1 00:06:00.739 00:06:00.739 ' 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:00.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.739 --rc 
genhtml_branch_coverage=1 00:06:00.739 --rc genhtml_function_coverage=1 00:06:00.739 --rc genhtml_legend=1 00:06:00.739 --rc geninfo_all_blocks=1 00:06:00.739 --rc geninfo_unexecuted_blocks=1 00:06:00.739 00:06:00.739 ' 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:00.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.739 --rc genhtml_branch_coverage=1 00:06:00.739 --rc genhtml_function_coverage=1 00:06:00.739 --rc genhtml_legend=1 00:06:00.739 --rc geninfo_all_blocks=1 00:06:00.739 --rc geninfo_unexecuted_blocks=1 00:06:00.739 00:06:00.739 ' 00:06:00.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:00.739 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:00.739 --rc genhtml_branch_coverage=1 00:06:00.739 --rc genhtml_function_coverage=1 00:06:00.739 --rc genhtml_legend=1 00:06:00.739 --rc geninfo_all_blocks=1 00:06:00.739 --rc geninfo_unexecuted_blocks=1 00:06:00.739 00:06:00.739 ' 00:06:00.739 01:26:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:00.739 01:26:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58777 00:06:00.739 01:26:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58777 00:06:00.739 01:26:09 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 58777 ']' 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.739 01:26:09 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:00.998 [2024-11-17 01:26:09.313429] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:00.998 [2024-11-17 01:26:09.313852] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58777 ] 00:06:01.257 [2024-11-17 01:26:09.487325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.257 [2024-11-17 01:26:09.581150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.516 [2024-11-17 01:26:09.770565] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:01.775 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:01.775 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:01.775 01:26:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:01.776 01:26:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:01.776 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:01.776 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:01.776 { 00:06:01.776 "filename": "/tmp/spdk_mem_dump.txt" 00:06:01.776 } 00:06:01.776 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:01.776 01:26:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:02.036 DPDK memory size 816.000000 MiB in 1 heap(s) 00:06:02.036 1 heaps totaling size 816.000000 MiB 00:06:02.036 size: 816.000000 MiB heap id: 0 00:06:02.036 end heaps---------- 00:06:02.036 9 mempools totaling size 595.772034 MiB 00:06:02.036 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:02.036 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:02.036 size: 92.545471 MiB name: bdev_io_58777 00:06:02.036 size: 50.003479 MiB name: msgpool_58777 00:06:02.036 size: 36.509338 MiB name: fsdev_io_58777 00:06:02.036 size: 21.763794 MiB name: PDU_Pool 00:06:02.036 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:02.036 size: 4.133484 MiB name: evtpool_58777 00:06:02.036 size: 0.026123 MiB name: Session_Pool 00:06:02.036 end mempools------- 00:06:02.036 6 memzones totaling size 4.142822 MiB 00:06:02.036 size: 1.000366 MiB name: RG_ring_0_58777 00:06:02.036 size: 1.000366 MiB name: RG_ring_1_58777 00:06:02.036 size: 1.000366 MiB name: RG_ring_4_58777 00:06:02.036 size: 1.000366 MiB name: RG_ring_5_58777 00:06:02.036 size: 0.125366 MiB name: RG_ring_2_58777 00:06:02.036 size: 0.015991 MiB name: RG_ring_3_58777 00:06:02.036 end memzones------- 00:06:02.036 01:26:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:02.036 heap id: 0 total size: 816.000000 MiB number of busy elements: 319 number of free elements: 18 00:06:02.036 list of free elements. 
size: 16.790405 MiB 00:06:02.036 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:02.036 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:02.036 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:02.036 element at address: 0x200018d00040 with size: 0.999939 MiB 00:06:02.036 element at address: 0x200019100040 with size: 0.999939 MiB 00:06:02.036 element at address: 0x200019200000 with size: 0.999084 MiB 00:06:02.036 element at address: 0x200031e00000 with size: 0.994324 MiB 00:06:02.036 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:02.036 element at address: 0x200018a00000 with size: 0.959656 MiB 00:06:02.036 element at address: 0x200019500040 with size: 0.936401 MiB 00:06:02.036 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:02.036 element at address: 0x20001ac00000 with size: 0.560974 MiB 00:06:02.036 element at address: 0x200000c00000 with size: 0.490173 MiB 00:06:02.036 element at address: 0x200018e00000 with size: 0.487976 MiB 00:06:02.036 element at address: 0x200019600000 with size: 0.485413 MiB 00:06:02.036 element at address: 0x200012c00000 with size: 0.443237 MiB 00:06:02.036 element at address: 0x200028000000 with size: 0.390442 MiB 00:06:02.036 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:02.036 list of standard malloc elements. size: 199.288696 MiB 00:06:02.036 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:02.036 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:02.036 element at address: 0x200018bfff80 with size: 1.000183 MiB 00:06:02.036 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:06:02.036 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:02.036 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:02.036 element at address: 0x2000195eff40 with size: 0.062683 MiB 00:06:02.036 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:02.036 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:02.036 element at address: 0x2000195efdc0 with size: 0.000366 MiB 00:06:02.036 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:02.036 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:02.036 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004fe940 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004fef40 with size: 0.000244 MiB 
00:06:02.037 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e0c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:02.037 element at 
address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff180 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71780 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71880 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71980 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71a80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71b80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71c80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71d80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71e80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c71f80 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c72080 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012c72180 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200012cf24c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018afdd00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7cec0 
with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7cfc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d0c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d1c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d2c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d3c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d4c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d5c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d6c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d7c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d8c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018e7d9c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x200018efdd00 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000192ffc40 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000195efbc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000195efcc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x2000196bc680 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac8f9c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac8fac0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac8fbc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac8fcc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac8fdc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac8fec0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac8ffc0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac900c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac901c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac902c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac903c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac904c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac905c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac906c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac907c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac908c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac909c0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac90ac0 with size: 0.000244 MiB 00:06:02.037 element at address: 0x20001ac90bc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac90cc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac90dc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac90ec0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac90fc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac910c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac911c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac912c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac913c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac914c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac915c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac916c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac917c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac918c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac919c0 with size: 0.000244 MiB 
00:06:02.038 element at address: 0x20001ac91ac0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac91bc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac91cc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac91dc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac91ec0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac91fc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac920c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac921c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac922c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac923c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac924c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac925c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac926c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac927c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac928c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac929c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac92ac0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac92bc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac92cc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac92dc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac92ec0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac92fc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac930c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac931c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac932c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac933c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac934c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac935c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac936c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac937c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac938c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac939c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac93ac0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac93bc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac93cc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac93dc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac93ec0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac93fc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac940c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac941c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac942c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac943c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac944c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac945c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac946c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac947c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac948c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac949c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac94ac0 with size: 0.000244 MiB 00:06:02.038 element at 
address: 0x20001ac94bc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac94cc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac94dc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac94ec0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac94fc0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac950c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac951c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac952c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20001ac953c0 with size: 0.000244 MiB 00:06:02.038 element at address: 0x200028063f40 with size: 0.000244 MiB 00:06:02.038 element at address: 0x200028064040 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ad00 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806af80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b080 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b180 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b280 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b380 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b480 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b580 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b680 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b780 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b880 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806b980 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ba80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806bb80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806bc80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806bd80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806be80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806bf80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c080 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c180 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c280 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c380 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c480 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c580 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c680 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c780 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c880 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806c980 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ca80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806cb80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806cc80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806cd80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ce80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806cf80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d080 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d180 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d280 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d380 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d480 
with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d580 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d680 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d780 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d880 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806d980 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806da80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806db80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806dc80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806dd80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806de80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806df80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e080 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e180 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e280 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e380 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e480 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e580 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e680 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e780 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e880 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806e980 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ea80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806eb80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ec80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ed80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ee80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806ef80 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806f080 with size: 0.000244 MiB 00:06:02.038 element at address: 0x20002806f180 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806f280 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806f380 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806f480 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806f580 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806f680 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806f780 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806f880 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806f980 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806fa80 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806fb80 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806fc80 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806fd80 with size: 0.000244 MiB 00:06:02.039 element at address: 0x20002806fe80 with size: 0.000244 MiB 00:06:02.039 list of memzone associated elements. 
size: 599.920898 MiB 00:06:02.039 element at address: 0x20001ac954c0 with size: 211.416809 MiB 00:06:02.039 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:02.039 element at address: 0x20002806ff80 with size: 157.562622 MiB 00:06:02.039 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:02.039 element at address: 0x200012df4740 with size: 92.045105 MiB 00:06:02.039 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58777_0 00:06:02.039 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:02.039 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58777_0 00:06:02.039 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:02.039 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58777_0 00:06:02.039 element at address: 0x2000197be900 with size: 20.255615 MiB 00:06:02.039 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:02.039 element at address: 0x200031ffeb00 with size: 18.005127 MiB 00:06:02.039 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:02.039 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:02.039 associated memzone info: size: 3.000122 MiB name: MP_evtpool_58777_0 00:06:02.039 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:02.039 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58777 00:06:02.039 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:02.039 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58777 00:06:02.039 element at address: 0x200018efde00 with size: 1.008179 MiB 00:06:02.039 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:02.039 element at address: 0x2000196bc780 with size: 1.008179 MiB 00:06:02.039 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:02.039 element at address: 0x200018afde00 with size: 1.008179 MiB 00:06:02.039 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:02.039 element at address: 0x200012cf25c0 with size: 1.008179 MiB 00:06:02.039 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:02.039 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:02.039 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58777 00:06:02.039 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:02.039 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58777 00:06:02.039 element at address: 0x2000192ffd40 with size: 1.000549 MiB 00:06:02.039 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58777 00:06:02.039 element at address: 0x200031efe8c0 with size: 1.000549 MiB 00:06:02.039 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58777 00:06:02.039 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:02.039 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58777 00:06:02.039 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:02.039 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58777 00:06:02.039 element at address: 0x200018e7dac0 with size: 0.500549 MiB 00:06:02.039 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:02.039 element at address: 0x200012c72280 with size: 0.500549 MiB 00:06:02.039 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:02.039 element at address: 0x20001967c440 with size: 0.250549 MiB 00:06:02.039 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:06:02.039 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:02.039 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_58777 00:06:02.039 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:02.039 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58777 00:06:02.039 element at address: 0x200018af5ac0 with size: 0.031799 MiB 00:06:02.039 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:02.039 element at address: 0x200028064140 with size: 0.023804 MiB 00:06:02.039 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:02.039 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:02.039 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58777 00:06:02.039 element at address: 0x20002806a2c0 with size: 0.002502 MiB 00:06:02.039 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:02.039 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:02.039 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58777 00:06:02.039 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:02.039 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58777 00:06:02.039 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:02.039 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58777 00:06:02.039 element at address: 0x20002806ae00 with size: 0.000366 MiB 00:06:02.039 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:02.039 01:26:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:02.039 01:26:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 58777 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 58777 ']' 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 58777 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58777 00:06:02.039 killing process with pid 58777 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58777' 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 58777 00:06:02.039 01:26:10 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 58777 00:06:03.943 ************************************ 00:06:03.943 END TEST dpdk_mem_utility 00:06:03.943 ************************************ 00:06:03.943 00:06:03.943 real 0m3.143s 00:06:03.943 user 0m3.232s 00:06:03.943 sys 0m0.473s 00:06:03.943 01:26:12 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:03.943 01:26:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:03.943 01:26:12 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.943 01:26:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:03.943 01:26:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.943 01:26:12 -- common/autotest_common.sh@10 -- # set +x 
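The dpdk_mem_utility test that finishes above reduces to a short query/parse loop against a live target. A minimal sketch of that loop, run from the SPDK repo root and using only commands that appear in this run; the rpc_get_methods polling line stands in for the test's waitforlisten helper and is illustrative only.
build/bin/spdk_tgt &                        # start the target; RPC socket defaults to /var/tmp/spdk.sock
spdkpid=$!
until scripts/rpc.py rpc_get_methods > /dev/null 2>&1; do sleep 1; done   # wait for the RPC socket (illustrative probe)
scripts/rpc.py env_dpdk_get_mem_stats       # replies with { "filename": "/tmp/spdk_mem_dump.txt" }
scripts/dpdk_mem_info.py                    # heap / mempool / memzone summary, as shown in the dump above
scripts/dpdk_mem_info.py -m 0               # per-element detail for heap 0
kill $spdkpid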
00:06:03.943 ************************************ 00:06:03.943 START TEST event 00:06:03.943 ************************************ 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:03.943 * Looking for test storage... 00:06:03.943 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:03.943 01:26:12 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:03.943 01:26:12 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:03.943 01:26:12 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:03.943 01:26:12 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:03.943 01:26:12 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:03.943 01:26:12 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:03.943 01:26:12 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:03.943 01:26:12 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:03.943 01:26:12 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:03.943 01:26:12 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:03.943 01:26:12 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:03.943 01:26:12 event -- scripts/common.sh@344 -- # case "$op" in 00:06:03.943 01:26:12 event -- scripts/common.sh@345 -- # : 1 00:06:03.943 01:26:12 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:03.943 01:26:12 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:03.943 01:26:12 event -- scripts/common.sh@365 -- # decimal 1 00:06:03.943 01:26:12 event -- scripts/common.sh@353 -- # local d=1 00:06:03.943 01:26:12 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:03.943 01:26:12 event -- scripts/common.sh@355 -- # echo 1 00:06:03.943 01:26:12 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:03.943 01:26:12 event -- scripts/common.sh@366 -- # decimal 2 00:06:03.943 01:26:12 event -- scripts/common.sh@353 -- # local d=2 00:06:03.943 01:26:12 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:03.943 01:26:12 event -- scripts/common.sh@355 -- # echo 2 00:06:03.943 01:26:12 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:03.943 01:26:12 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:03.943 01:26:12 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:03.943 01:26:12 event -- scripts/common.sh@368 -- # return 0 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:03.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.943 --rc genhtml_branch_coverage=1 00:06:03.943 --rc genhtml_function_coverage=1 00:06:03.943 --rc genhtml_legend=1 00:06:03.943 --rc geninfo_all_blocks=1 00:06:03.943 --rc geninfo_unexecuted_blocks=1 00:06:03.943 00:06:03.943 ' 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:03.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.943 --rc genhtml_branch_coverage=1 00:06:03.943 --rc genhtml_function_coverage=1 00:06:03.943 --rc genhtml_legend=1 00:06:03.943 --rc 
geninfo_all_blocks=1 00:06:03.943 --rc geninfo_unexecuted_blocks=1 00:06:03.943 00:06:03.943 ' 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:03.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.943 --rc genhtml_branch_coverage=1 00:06:03.943 --rc genhtml_function_coverage=1 00:06:03.943 --rc genhtml_legend=1 00:06:03.943 --rc geninfo_all_blocks=1 00:06:03.943 --rc geninfo_unexecuted_blocks=1 00:06:03.943 00:06:03.943 ' 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:03.943 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:03.943 --rc genhtml_branch_coverage=1 00:06:03.943 --rc genhtml_function_coverage=1 00:06:03.943 --rc genhtml_legend=1 00:06:03.943 --rc geninfo_all_blocks=1 00:06:03.943 --rc geninfo_unexecuted_blocks=1 00:06:03.943 00:06:03.943 ' 00:06:03.943 01:26:12 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:03.943 01:26:12 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:03.943 01:26:12 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:03.943 01:26:12 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:03.943 01:26:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:03.943 ************************************ 00:06:03.943 START TEST event_perf 00:06:03.943 ************************************ 00:06:03.943 01:26:12 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:04.203 Running I/O for 1 seconds...[2024-11-17 01:26:12.432847] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:04.203 [2024-11-17 01:26:12.433149] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58874 ] 00:06:04.203 [2024-11-17 01:26:12.613542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:04.462 [2024-11-17 01:26:12.709461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.462 [2024-11-17 01:26:12.709558] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:04.462 [2024-11-17 01:26:12.709659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.462 [2024-11-17 01:26:12.709672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.841 Running I/O for 1 seconds... 00:06:05.841 lcore 0: 191823 00:06:05.841 lcore 1: 191823 00:06:05.841 lcore 2: 191824 00:06:05.841 lcore 3: 191823 00:06:05.841 done. 
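For scale on the counters just printed: event_perf was launched with -m 0xF -t 1, so one reactor runs on each of the four cores and each lcore line reports the events processed on that reactor over the one-second window. A rough sketch of re-running it and summing the per-lcore totals (binary path as invoked in this run; the awk step is illustration only):
test/event/event_perf/event_perf -m 0xF -t 1 | \
    awk '/^lcore/ { total += $3 } END { print "total events:", total }'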
00:06:05.841 ************************************ 00:06:05.841 END TEST event_perf 00:06:05.841 ************************************ 00:06:05.841 00:06:05.841 real 0m1.537s 00:06:05.841 user 0m4.308s 00:06:05.841 sys 0m0.103s 00:06:05.841 01:26:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.841 01:26:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:05.841 01:26:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:05.841 01:26:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:05.841 01:26:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.841 01:26:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:05.841 ************************************ 00:06:05.841 START TEST event_reactor 00:06:05.841 ************************************ 00:06:05.841 01:26:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:05.841 [2024-11-17 01:26:14.008340] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:05.841 [2024-11-17 01:26:14.008461] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58919 ] 00:06:05.841 [2024-11-17 01:26:14.169715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.841 [2024-11-17 01:26:14.249238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.219 test_start 00:06:07.219 oneshot 00:06:07.219 tick 100 00:06:07.219 tick 100 00:06:07.219 tick 250 00:06:07.219 tick 100 00:06:07.219 tick 100 00:06:07.219 tick 100 00:06:07.219 tick 250 00:06:07.219 tick 500 00:06:07.219 tick 100 00:06:07.220 tick 100 00:06:07.220 tick 250 00:06:07.220 tick 100 00:06:07.220 tick 100 00:06:07.220 test_end 00:06:07.220 00:06:07.220 real 0m1.460s 00:06:07.220 user 0m1.285s 00:06:07.220 sys 0m0.068s 00:06:07.220 01:26:15 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:07.220 01:26:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:07.220 ************************************ 00:06:07.220 END TEST event_reactor 00:06:07.220 ************************************ 00:06:07.220 01:26:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:07.220 01:26:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:07.220 01:26:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:07.220 01:26:15 event -- common/autotest_common.sh@10 -- # set +x 00:06:07.220 ************************************ 00:06:07.220 START TEST event_reactor_perf 00:06:07.220 ************************************ 00:06:07.220 01:26:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:07.220 [2024-11-17 01:26:15.536212] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:07.220 [2024-11-17 01:26:15.536383] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58950 ] 00:06:07.478 [2024-11-17 01:26:15.712494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.478 [2024-11-17 01:26:15.792281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.855 test_start 00:06:08.855 test_end 00:06:08.855 Performance: 331501 events per second 00:06:08.855 00:06:08.855 real 0m1.499s 00:06:08.855 user 0m1.300s 00:06:08.855 sys 0m0.091s 00:06:08.855 01:26:16 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:08.855 01:26:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:08.855 ************************************ 00:06:08.855 END TEST event_reactor_perf 00:06:08.855 ************************************ 00:06:08.855 01:26:17 event -- event/event.sh@49 -- # uname -s 00:06:08.855 01:26:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:08.855 01:26:17 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:08.855 01:26:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:08.855 01:26:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:08.855 01:26:17 event -- common/autotest_common.sh@10 -- # set +x 00:06:08.855 ************************************ 00:06:08.855 START TEST event_scheduler 00:06:08.855 ************************************ 00:06:08.855 01:26:17 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:08.855 * Looking for test storage... 
00:06:08.855 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:08.855 01:26:17 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:08.855 01:26:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:08.855 01:26:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:08.855 01:26:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:08.855 01:26:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:08.855 01:26:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:08.855 01:26:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:08.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.855 --rc genhtml_branch_coverage=1 00:06:08.855 --rc genhtml_function_coverage=1 00:06:08.856 --rc genhtml_legend=1 00:06:08.856 --rc geninfo_all_blocks=1 00:06:08.856 --rc geninfo_unexecuted_blocks=1 00:06:08.856 00:06:08.856 ' 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:08.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.856 --rc genhtml_branch_coverage=1 00:06:08.856 --rc genhtml_function_coverage=1 00:06:08.856 --rc genhtml_legend=1 00:06:08.856 --rc geninfo_all_blocks=1 00:06:08.856 --rc geninfo_unexecuted_blocks=1 00:06:08.856 00:06:08.856 ' 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:08.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.856 --rc genhtml_branch_coverage=1 00:06:08.856 --rc genhtml_function_coverage=1 00:06:08.856 --rc genhtml_legend=1 00:06:08.856 --rc geninfo_all_blocks=1 00:06:08.856 --rc geninfo_unexecuted_blocks=1 00:06:08.856 00:06:08.856 ' 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:08.856 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:08.856 --rc genhtml_branch_coverage=1 00:06:08.856 --rc genhtml_function_coverage=1 00:06:08.856 --rc genhtml_legend=1 00:06:08.856 --rc geninfo_all_blocks=1 00:06:08.856 --rc geninfo_unexecuted_blocks=1 00:06:08.856 00:06:08.856 ' 00:06:08.856 01:26:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:08.856 01:26:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=59027 00:06:08.856 01:26:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:08.856 01:26:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 59027 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 59027 ']' 00:06:08.856 01:26:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # 
/home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:08.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:08.856 01:26:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:09.115 [2024-11-17 01:26:17.340368] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:09.115 [2024-11-17 01:26:17.340540] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59027 ] 00:06:09.116 [2024-11-17 01:26:17.529302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:09.376 [2024-11-17 01:26:17.662268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.376 [2024-11-17 01:26:17.662415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:09.376 [2024-11-17 01:26:17.662569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:09.376 [2024-11-17 01:26:17.662577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:10.016 01:26:18 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.016 01:26:18 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:10.016 01:26:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:10.016 01:26:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.016 01:26:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.016 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:10.016 POWER: Cannot set governor of lcore 0 to userspace 00:06:10.016 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:10.016 POWER: Cannot set governor of lcore 0 to performance 00:06:10.016 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:10.016 POWER: Cannot set governor of lcore 0 to userspace 00:06:10.016 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:10.016 POWER: Cannot set governor of lcore 0 to userspace 00:06:10.016 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:10.016 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:10.016 POWER: Unable to set Power Management Environment for lcore 0 00:06:10.016 [2024-11-17 01:26:18.373779] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:06:10.016 [2024-11-17 01:26:18.374103] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:06:10.016 [2024-11-17 01:26:18.374136] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:10.016 [2024-11-17 
01:26:18.374225] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:10.016 [2024-11-17 01:26:18.374353] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:10.016 [2024-11-17 01:26:18.374497] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:10.016 01:26:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.016 01:26:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:10.016 01:26:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.016 01:26:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.275 [2024-11-17 01:26:18.532736] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:10.275 [2024-11-17 01:26:18.611777] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:06:10.275 01:26:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.275 01:26:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:10.275 01:26:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.275 01:26:18 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 ************************************ 00:06:10.276 START TEST scheduler_create_thread 00:06:10.276 ************************************ 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 2 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 3 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 4 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 5 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 6 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 7 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 8 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 9 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 10 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.276 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:10.535 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:10.535 01:26:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:10.535 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:10.535 01:26:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:11.913 01:26:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.913 01:26:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:11.913 01:26:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:11.913 01:26:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.913 01:26:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.851 01:26:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:12.851 00:06:12.851 real 0m2.620s 00:06:12.851 user 0m0.021s 00:06:12.851 sys 0m0.005s 00:06:12.851 01:26:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.851 01:26:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:12.851 ************************************ 00:06:12.851 END TEST scheduler_create_thread 00:06:12.851 ************************************ 00:06:12.851 01:26:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:12.851 01:26:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 59027 00:06:12.851 01:26:21 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 59027 ']' 00:06:12.851 01:26:21 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 59027 00:06:12.851 01:26:21 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:12.851 01:26:21 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.851 01:26:21 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59027 00:06:13.110 01:26:21 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:13.110 01:26:21 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:13.110 01:26:21 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59027' 00:06:13.110 killing process with pid 
59027 00:06:13.110 01:26:21 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 59027 00:06:13.110 01:26:21 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 59027 00:06:13.369 [2024-11-17 01:26:21.724122] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:14.307 00:06:14.307 real 0m5.564s 00:06:14.307 user 0m10.043s 00:06:14.307 sys 0m0.465s 00:06:14.307 01:26:22 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:14.307 01:26:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:14.307 ************************************ 00:06:14.307 END TEST event_scheduler 00:06:14.307 ************************************ 00:06:14.307 01:26:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:14.307 01:26:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:14.307 01:26:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:14.307 01:26:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:14.307 01:26:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.307 ************************************ 00:06:14.307 START TEST app_repeat 00:06:14.307 ************************************ 00:06:14.307 01:26:22 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=59133 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:14.307 Process app_repeat pid: 59133 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 59133' 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:14.307 spdk_app_start Round 0 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:14.307 01:26:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59133 /var/tmp/spdk-nbd.sock 00:06:14.307 01:26:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59133 ']' 00:06:14.307 01:26:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:14.307 01:26:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:14.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:14.307 01:26:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
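For reference, the event_scheduler suite that just finished drives the SPDK target entirely over its RPC socket. Below is a minimal sketch of that sequence, reconstructed from the calls logged above; it is not part of the captured output, and it assumes the scheduler app was started with --wait-for-rpc on /var/tmp/spdk.sock and that the scheduler_plugin shipped with test/event/scheduler is importable by rpc.py:

# Reconstructed sketch of the scheduler suite's RPC sequence; not part of this log.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

$rpc framework_set_scheduler dynamic        # select the dynamic scheduler before init
$rpc framework_start_init                   # finish subsystem initialization

# Thread-shaping RPCs come from the test plugin, loaded with --plugin scheduler_plugin.
$rpc --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
$rpc --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
$rpc --plugin scheduler_plugin scheduler_thread_set_active "$tid" 50    # raise it to 50% busy
tid=$($rpc --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
$rpc --plugin scheduler_plugin scheduler_thread_delete "$tid"           # then remove it again
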
00:06:14.307 01:26:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:14.307 01:26:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:14.307 [2024-11-17 01:26:22.739994] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:14.307 [2024-11-17 01:26:22.740176] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59133 ] 00:06:14.567 [2024-11-17 01:26:22.919515] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:14.567 [2024-11-17 01:26:23.009394] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.567 [2024-11-17 01:26:23.009401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.827 [2024-11-17 01:26:23.168554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:15.395 01:26:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:15.395 01:26:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:15.395 01:26:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.654 Malloc0 00:06:15.654 01:26:24 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:16.222 Malloc1 00:06:16.222 01:26:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.222 01:26:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:16.482 /dev/nbd0 00:06:16.482 01:26:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:16.482 01:26:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@872 -- # local 
nbd_name=nbd0 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.482 1+0 records in 00:06:16.482 1+0 records out 00:06:16.482 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000373637 s, 11.0 MB/s 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.482 01:26:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:16.482 01:26:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.482 01:26:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.482 01:26:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:16.742 /dev/nbd1 00:06:16.742 01:26:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:16.742 01:26:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:16.742 1+0 records in 00:06:16.742 1+0 records out 00:06:16.742 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000237171 s, 17.3 MB/s 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:16.742 01:26:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:16.742 01:26:25 event.app_repeat -- 
common/autotest_common.sh@893 -- # return 0 00:06:16.742 01:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:16.742 01:26:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:16.742 01:26:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.742 01:26:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.742 01:26:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:17.001 { 00:06:17.001 "nbd_device": "/dev/nbd0", 00:06:17.001 "bdev_name": "Malloc0" 00:06:17.001 }, 00:06:17.001 { 00:06:17.001 "nbd_device": "/dev/nbd1", 00:06:17.001 "bdev_name": "Malloc1" 00:06:17.001 } 00:06:17.001 ]' 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:17.001 { 00:06:17.001 "nbd_device": "/dev/nbd0", 00:06:17.001 "bdev_name": "Malloc0" 00:06:17.001 }, 00:06:17.001 { 00:06:17.001 "nbd_device": "/dev/nbd1", 00:06:17.001 "bdev_name": "Malloc1" 00:06:17.001 } 00:06:17.001 ]' 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:17.001 /dev/nbd1' 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:17.001 /dev/nbd1' 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:17.001 01:26:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:17.002 256+0 records in 00:06:17.002 256+0 records out 00:06:17.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00990419 s, 106 MB/s 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:17.002 256+0 records in 00:06:17.002 256+0 records out 00:06:17.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0270933 s, 38.7 MB/s 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:17.002 256+0 records in 00:06:17.002 
256+0 records out 00:06:17.002 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293377 s, 35.7 MB/s 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.002 01:26:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 
00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:17.570 01:26:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:18.138 01:26:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:18.138 01:26:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:18.397 01:26:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:19.335 [2024-11-17 01:26:27.621585] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:19.335 [2024-11-17 01:26:27.702146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.335 [2024-11-17 01:26:27.702154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.594 [2024-11-17 01:26:27.842635] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:19.594 [2024-11-17 01:26:27.842763] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:19.594 [2024-11-17 01:26:27.842786] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:21.498 spdk_app_start Round 1 00:06:21.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:21.498 01:26:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:21.498 01:26:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:21.498 01:26:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59133 /var/tmp/spdk-nbd.sock 00:06:21.498 01:26:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59133 ']' 00:06:21.498 01:26:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:21.498 01:26:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.498 01:26:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
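The write/verify pass that Round 0 just completed (and that Rounds 1 and 2 repeat below) reduces to pushing one random 1 MiB buffer through each exported /dev/nbdX and comparing it back. A stand-alone sketch using only the commands visible in this log (the temp-file path is the test's own), not part of the captured output:

# Sketch of nbd_dd_data_verify's write/verify steps; not part of this log.
tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
nbd_list=(/dev/nbd0 /dev/nbd1)

dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
for nbd in "${nbd_list[@]}"; do
  dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct    # write the pattern through each nbd device
done
for nbd in "${nbd_list[@]}"; do
  cmp -b -n 1M "$tmp" "$nbd"                               # read back and compare byte-for-byte
done
rm "$tmp"
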
00:06:21.498 01:26:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.498 01:26:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:21.757 01:26:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:21.757 01:26:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:21.757 01:26:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.016 Malloc0 00:06:22.016 01:26:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:22.276 Malloc1 00:06:22.276 01:26:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.276 01:26:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:22.535 /dev/nbd0 00:06:22.535 01:26:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:22.535 01:26:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:22.535 1+0 records in 00:06:22.535 1+0 records out 
00:06:22.535 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264383 s, 15.5 MB/s 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:22.535 01:26:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:22.535 01:26:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:22.535 01:26:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:22.535 01:26:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:22.794 /dev/nbd1 00:06:23.053 01:26:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:23.053 01:26:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:23.053 1+0 records in 00:06:23.053 1+0 records out 00:06:23.053 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000390234 s, 10.5 MB/s 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:23.053 01:26:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:23.053 01:26:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:23.053 01:26:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:23.053 01:26:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.053 01:26:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.053 01:26:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:23.312 { 00:06:23.312 "nbd_device": "/dev/nbd0", 00:06:23.312 "bdev_name": "Malloc0" 00:06:23.312 }, 00:06:23.312 { 00:06:23.312 "nbd_device": "/dev/nbd1", 00:06:23.312 "bdev_name": "Malloc1" 00:06:23.312 } 
00:06:23.312 ]' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:23.312 { 00:06:23.312 "nbd_device": "/dev/nbd0", 00:06:23.312 "bdev_name": "Malloc0" 00:06:23.312 }, 00:06:23.312 { 00:06:23.312 "nbd_device": "/dev/nbd1", 00:06:23.312 "bdev_name": "Malloc1" 00:06:23.312 } 00:06:23.312 ]' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:23.312 /dev/nbd1' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:23.312 /dev/nbd1' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:23.312 256+0 records in 00:06:23.312 256+0 records out 00:06:23.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00839221 s, 125 MB/s 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:23.312 256+0 records in 00:06:23.312 256+0 records out 00:06:23.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0212662 s, 49.3 MB/s 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:23.312 256+0 records in 00:06:23.312 256+0 records out 00:06:23.312 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0299193 s, 35.0 MB/s 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:23.312 01:26:31 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.312 01:26:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:23.570 01:26:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:23.570 01:26:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:23.570 01:26:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:23.570 01:26:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.570 01:26:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.570 01:26:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:23.570 01:26:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.571 01:26:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.571 01:26:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:23.571 01:26:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:23.829 01:26:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:23.830 01:26:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.830 01:26:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:24.088 01:26:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:24.088 01:26:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:24.088 01:26:32 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:24.347 01:26:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:24.347 01:26:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:24.606 01:26:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:25.542 [2024-11-17 01:26:33.803444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:25.542 [2024-11-17 01:26:33.883085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:25.542 [2024-11-17 01:26:33.883090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.801 [2024-11-17 01:26:34.027659] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:25.801 [2024-11-17 01:26:34.027805] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:25.801 [2024-11-17 01:26:34.027833] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:27.703 spdk_app_start Round 2 00:06:27.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:27.703 01:26:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:27.703 01:26:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:27.703 01:26:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 59133 /var/tmp/spdk-nbd.sock 00:06:27.703 01:26:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59133 ']' 00:06:27.703 01:26:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:27.703 01:26:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:27.703 01:26:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
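After each round's verification, the harness tears the nbd exports down over the same socket and stops the target before the next round restarts it. A condensed sketch of that teardown, again using only calls shown in this log (jq filter and socket path as logged), not part of the captured output:

# Sketch of the per-round teardown; not part of this log.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"

$rpc nbd_stop_disk /dev/nbd0
$rpc nbd_stop_disk /dev/nbd1
remaining=$($rpc nbd_get_disks | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
[ "$remaining" -eq 0 ]            # no nbd devices should be left behind
$rpc spdk_kill_instance SIGTERM   # stop the app_repeat target; the script then sleeps and relaunches it
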
00:06:27.703 01:26:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:27.703 01:26:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:27.961 01:26:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.961 01:26:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:27.961 01:26:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.219 Malloc0 00:06:28.220 01:26:36 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:28.478 Malloc1 00:06:28.478 01:26:36 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.478 01:26:36 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:28.737 /dev/nbd0 00:06:28.737 01:26:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:28.737 01:26:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.737 1+0 records in 00:06:28.737 1+0 records out 
00:06:28.737 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000385088 s, 10.6 MB/s 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:28.737 01:26:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:28.737 01:26:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:28.737 01:26:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:28.737 01:26:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:28.996 /dev/nbd1 00:06:28.996 01:26:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:28.996 01:26:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:28.996 1+0 records in 00:06:28.996 1+0 records out 00:06:28.996 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350394 s, 11.7 MB/s 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:28.996 01:26:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:29.255 01:26:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:29.255 01:26:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:29.255 01:26:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:29.255 01:26:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:29.255 01:26:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:29.255 01:26:37 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.255 01:26:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:29.255 01:26:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:29.255 { 00:06:29.255 "nbd_device": "/dev/nbd0", 00:06:29.255 "bdev_name": "Malloc0" 00:06:29.256 }, 00:06:29.256 { 00:06:29.256 "nbd_device": "/dev/nbd1", 00:06:29.256 "bdev_name": "Malloc1" 00:06:29.256 } 
00:06:29.256 ]' 00:06:29.256 01:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:29.256 { 00:06:29.256 "nbd_device": "/dev/nbd0", 00:06:29.256 "bdev_name": "Malloc0" 00:06:29.256 }, 00:06:29.256 { 00:06:29.256 "nbd_device": "/dev/nbd1", 00:06:29.256 "bdev_name": "Malloc1" 00:06:29.256 } 00:06:29.256 ]' 00:06:29.256 01:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:29.514 01:26:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:29.514 /dev/nbd1' 00:06:29.514 01:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:29.514 /dev/nbd1' 00:06:29.514 01:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:29.514 01:26:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:29.515 256+0 records in 00:06:29.515 256+0 records out 00:06:29.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105054 s, 99.8 MB/s 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:29.515 256+0 records in 00:06:29.515 256+0 records out 00:06:29.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0293763 s, 35.7 MB/s 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:29.515 256+0 records in 00:06:29.515 256+0 records out 00:06:29.515 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.03003 s, 34.9 MB/s 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.515 01:26:37 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:29.773 01:26:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.032 01:26:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:30.290 01:26:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:30.290 01:26:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:30.290 01:26:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # 
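The write/verify pass above fills a scratch file with random data, copies it onto each NBD device with O_DIRECT, then compares the first 1M of every device back against the file before removing it. A condensed sketch of that flow as it appears in the trace (the path and function name are illustrative):

  # Sketch of the dd/cmp data-verify pass, not the exact nbd_common.sh code.
  verify_nbd_data() {
      local tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
      local nbd_list=(/dev/nbd0 /dev/nbd1)

      dd if=/dev/urandom of="$tmp" bs=4096 count=256             # 1 MiB of random data
      for dev in "${nbd_list[@]}"; do
          dd if="$tmp" of="$dev" bs=4096 count=256 oflag=direct   # write it to each device
      done
      for dev in "${nbd_list[@]}"; do
          cmp -b -n 1M "$tmp" "$dev"                              # any mismatch fails the test
      done
      rm "$tmp"
  }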
echo '[]' 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:30.550 01:26:38 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:30.550 01:26:38 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:30.809 01:26:39 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:31.746 [2024-11-17 01:26:39.995071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:31.746 [2024-11-17 01:26:40.087346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.746 [2024-11-17 01:26:40.087356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.005 [2024-11-17 01:26:40.236453] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:32.005 [2024-11-17 01:26:40.236563] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.005 [2024-11-17 01:26:40.236584] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:33.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:33.927 01:26:42 event.app_repeat -- event/event.sh@38 -- # waitforlisten 59133 /var/tmp/spdk-nbd.sock 00:06:33.927 01:26:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 59133 ']' 00:06:33.927 01:26:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:33.927 01:26:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.927 01:26:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
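Each round ends the same way: the harness tells the app to terminate itself over RPC, sleeps, and begins the next iteration once the app is listening again. From the Round banners, the bdev_malloc_create calls, and the spdk_kill_instance/sleep pair in the trace, the app_repeat loop is roughly the following sketch (pid and socket taken from the log; helper names as traced, details simplified):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten 59133 /var/tmp/spdk-nbd.sock                  # app is back up
      "$rpc" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 # Malloc0
      "$rpc" -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 # Malloc1
      nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
      "$rpc" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM # end the round
      sleep 3
  done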
00:06:33.927 01:26:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.927 01:26:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:34.186 01:26:42 event.app_repeat -- event/event.sh@39 -- # killprocess 59133 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 59133 ']' 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 59133 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59133 00:06:34.186 killing process with pid 59133 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59133' 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@973 -- # kill 59133 00:06:34.186 01:26:42 event.app_repeat -- common/autotest_common.sh@978 -- # wait 59133 00:06:34.754 spdk_app_start is called in Round 0. 00:06:34.754 Shutdown signal received, stop current app iteration 00:06:34.754 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:34.754 spdk_app_start is called in Round 1. 00:06:34.754 Shutdown signal received, stop current app iteration 00:06:34.754 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:34.754 spdk_app_start is called in Round 2. 00:06:34.754 Shutdown signal received, stop current app iteration 00:06:34.754 Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 reinitialization... 00:06:34.754 spdk_app_start is called in Round 3. 00:06:34.754 Shutdown signal received, stop current app iteration 00:06:35.013 01:26:43 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:35.013 01:26:43 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:35.013 00:06:35.013 real 0m20.557s 00:06:35.013 user 0m45.912s 00:06:35.013 sys 0m2.589s 00:06:35.013 01:26:43 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:35.013 01:26:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:35.013 ************************************ 00:06:35.013 END TEST app_repeat 00:06:35.013 ************************************ 00:06:35.013 01:26:43 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:35.013 01:26:43 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.013 01:26:43 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.013 01:26:43 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.013 01:26:43 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.013 ************************************ 00:06:35.013 START TEST cpu_locks 00:06:35.013 ************************************ 00:06:35.013 01:26:43 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:35.013 * Looking for test storage... 
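From here the log moves on to the cpu_locks suite. Every sub-test in it is launched through the same run_test wrapper that produced the START/END banners and the real/user/sys timings shown above for app_repeat. A simplified sketch of what that wrapper does (the real helper in autotest_common.sh also handles xtrace and exit-code bookkeeping):

  run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                          # accounts for the real/user/sys lines in the log
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
  }

  run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh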
00:06:35.013 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:35.013 01:26:43 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:35.013 01:26:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:06:35.013 01:26:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:35.013 01:26:43 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:35.013 01:26:43 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:35.014 01:26:43 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:35.273 01:26:43 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:35.273 01:26:43 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:35.273 01:26:43 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:35.273 01:26:43 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:35.273 01:26:43 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:35.273 01:26:43 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:35.273 01:26:43 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:35.273 01:26:43 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:35.273 01:26:43 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:35.273 01:26:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:35.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.273 --rc genhtml_branch_coverage=1 00:06:35.273 --rc genhtml_function_coverage=1 00:06:35.273 --rc genhtml_legend=1 00:06:35.273 --rc geninfo_all_blocks=1 00:06:35.273 --rc geninfo_unexecuted_blocks=1 00:06:35.273 00:06:35.273 ' 00:06:35.273 01:26:43 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:35.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.273 --rc genhtml_branch_coverage=1 00:06:35.273 --rc genhtml_function_coverage=1 
00:06:35.273 --rc genhtml_legend=1 00:06:35.273 --rc geninfo_all_blocks=1 00:06:35.273 --rc geninfo_unexecuted_blocks=1 00:06:35.273 00:06:35.273 ' 00:06:35.273 01:26:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:35.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.273 --rc genhtml_branch_coverage=1 00:06:35.273 --rc genhtml_function_coverage=1 00:06:35.273 --rc genhtml_legend=1 00:06:35.273 --rc geninfo_all_blocks=1 00:06:35.273 --rc geninfo_unexecuted_blocks=1 00:06:35.273 00:06:35.273 ' 00:06:35.273 01:26:43 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:35.273 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:35.273 --rc genhtml_branch_coverage=1 00:06:35.273 --rc genhtml_function_coverage=1 00:06:35.273 --rc genhtml_legend=1 00:06:35.273 --rc geninfo_all_blocks=1 00:06:35.273 --rc geninfo_unexecuted_blocks=1 00:06:35.273 00:06:35.273 ' 00:06:35.273 01:26:43 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:35.273 01:26:43 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:35.273 01:26:43 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:35.273 01:26:43 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:35.273 01:26:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:35.273 01:26:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:35.273 01:26:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.273 ************************************ 00:06:35.273 START TEST default_locks 00:06:35.273 ************************************ 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=59597 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 59597 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59597 ']' 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.273 01:26:43 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.273 [2024-11-17 01:26:43.617493] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:35.273 [2024-11-17 01:26:43.618272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59597 ] 00:06:35.532 [2024-11-17 01:26:43.795668] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.532 [2024-11-17 01:26:43.877443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.791 [2024-11-17 01:26:44.067500] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:36.359 01:26:44 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:36.359 01:26:44 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:36.359 01:26:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 59597 00:06:36.359 01:26:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 59597 00:06:36.359 01:26:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.618 01:26:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 59597 00:06:36.618 01:26:44 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 59597 ']' 00:06:36.618 01:26:44 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 59597 00:06:36.618 01:26:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:36.618 01:26:44 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:36.618 01:26:44 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59597 00:06:36.618 01:26:45 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:36.618 01:26:45 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:36.618 killing process with pid 59597 00:06:36.618 01:26:45 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59597' 00:06:36.618 01:26:45 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 59597 00:06:36.618 01:26:45 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 59597 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 59597 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59597 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 59597 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 59597 ']' 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.524 
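default_locks starts spdk_tgt pinned to core 0 (-m 0x1), checks that the running target holds a core-lock file, then tears it down. The locks_exist and killprocess traces above boil down to roughly this sketch (simplified from cpu_locks.sh and autotest_common.sh; it assumes the target was started from the same shell so wait can reap it):

  # locks_exist: the pid must have an spdk_cpu_lock file locked.
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }

  # killprocess: confirm the process is still there, signal it, wait for exit.
  killprocess() {
      local pid=$1
      ps --no-headers -o comm= "$pid"    # reactor_0 in the trace
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"
  }

  locks_exist 59597 && echo "core 0 lock held"
  killprocess 59597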
01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.524 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59597) - No such process 00:06:38.524 ERROR: process (pid: 59597) is no longer running 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:38.524 00:06:38.524 real 0m3.251s 00:06:38.524 user 0m3.433s 00:06:38.524 sys 0m0.548s 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.524 01:26:46 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.524 ************************************ 00:06:38.524 END TEST default_locks 00:06:38.524 ************************************ 00:06:38.524 01:26:46 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:38.524 01:26:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:38.524 01:26:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.524 01:26:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.524 ************************************ 00:06:38.524 START TEST default_locks_via_rpc 00:06:38.524 ************************************ 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=59661 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 59661 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59661 ']' 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:06:38.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.524 01:26:46 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.524 [2024-11-17 01:26:46.888960] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:38.524 [2024-11-17 01:26:46.889121] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59661 ] 00:06:38.783 [2024-11-17 01:26:47.048923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.783 [2024-11-17 01:26:47.144433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.043 [2024-11-17 01:26:47.372456] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 59661 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 59661 00:06:39.611 01:26:47 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 59661 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 59661 ']' 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 59661 00:06:40.179 01:26:48 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59661 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.179 killing process with pid 59661 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59661' 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 59661 00:06:40.179 01:26:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 59661 00:06:42.084 00:06:42.084 real 0m3.329s 00:06:42.084 user 0m3.508s 00:06:42.084 sys 0m0.587s 00:06:42.084 01:26:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:42.084 01:26:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:42.084 ************************************ 00:06:42.084 END TEST default_locks_via_rpc 00:06:42.084 ************************************ 00:06:42.084 01:26:50 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:42.084 01:26:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:42.084 01:26:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:42.084 01:26:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:42.084 ************************************ 00:06:42.084 START TEST non_locking_app_on_locked_coremask 00:06:42.084 ************************************ 00:06:42.084 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:42.084 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59724 00:06:42.084 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59724 /var/tmp/spdk.sock 00:06:42.084 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:42.084 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59724 ']' 00:06:42.084 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.084 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.085 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
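The default_locks_via_rpc run that just finished exercises the same lock files but flips them at runtime instead of at startup: disable the locks over RPC, confirm none are held, re-enable them, and confirm the lock is back before killing the target. A condensed sketch of that sequence (rpc_cmd in the trace is a thin wrapper around the rpc.py call shown here; the no-locks check is a file glob whose exact pattern is not visible in this excerpt):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  pid=59661

  "$rpc" -s "$sock" framework_disable_cpumask_locks
  # no_locks: the helper asserts that zero spdk_cpu_lock files remain

  "$rpc" -s "$sock" framework_enable_cpumask_locks
  lslocks -p "$pid" | grep -q spdk_cpu_lock          # locks_exist: the lock is back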
00:06:42.085 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.085 01:26:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.085 [2024-11-17 01:26:50.316148] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:42.085 [2024-11-17 01:26:50.316352] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59724 ] 00:06:42.085 [2024-11-17 01:26:50.494133] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.343 [2024-11-17 01:26:50.602127] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.602 [2024-11-17 01:26:50.837135] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59745 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59745 /var/tmp/spdk2.sock 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59745 ']' 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.861 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:42.861 01:26:51 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:43.121 [2024-11-17 01:26:51.430216] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:43.121 [2024-11-17 01:26:51.430397] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59745 ] 00:06:43.381 [2024-11-17 01:26:51.618710] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
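non_locking_app_on_locked_coremask runs two targets on the same core mask: the first (pid 59724) claims the core-0 lock, while the second (pid 59745) is launched with --disable-cpumask-locks and its own RPC socket so the two can coexist. The launch pattern visible above is roughly the following (waitforlisten is the harness helper traced in the log):

  spdk_tgt=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt

  "$spdk_tgt" -m 0x1 &                               # claims spdk_cpu_lock for core 0
  pid1=$!
  waitforlisten "$pid1" /var/tmp/spdk.sock

  "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!                                            # same mask, but no lock is taken
  waitforlisten "$pid2" /var/tmp/spdk2.sock

  lslocks -p "$pid1" | grep -q spdk_cpu_lock         # only the first target owns the lock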
00:06:43.381 [2024-11-17 01:26:51.618768] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.381 [2024-11-17 01:26:51.801105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.950 [2024-11-17 01:26:52.230125] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:44.889 01:26:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.889 01:26:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:44.889 01:26:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59724 00:06:44.889 01:26:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59724 00:06:44.889 01:26:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59724 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59724 ']' 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59724 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59724 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:45.826 killing process with pid 59724 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59724' 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59724 00:06:45.826 01:26:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59724 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59745 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59745 ']' 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59745 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59745 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.016 killing process with pid 59745 00:06:50.016 01:26:57 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59745' 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59745 00:06:50.016 01:26:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59745 00:06:51.919 00:06:51.919 real 0m9.774s 00:06:51.919 user 0m10.406s 00:06:51.919 sys 0m1.249s 00:06:51.919 01:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.919 01:26:59 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.919 ************************************ 00:06:51.919 END TEST non_locking_app_on_locked_coremask 00:06:51.919 ************************************ 00:06:51.919 01:26:59 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:51.919 01:26:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:51.919 01:26:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.919 01:26:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.919 ************************************ 00:06:51.919 START TEST locking_app_on_unlocked_coremask 00:06:51.919 ************************************ 00:06:51.919 01:26:59 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59877 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59877 /var/tmp/spdk.sock 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59877 ']' 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:51.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:51.919 01:27:00 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:51.919 [2024-11-17 01:27:00.141106] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:06:51.919 [2024-11-17 01:27:00.141330] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59877 ] 00:06:51.919 [2024-11-17 01:27:00.317101] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:51.919 [2024-11-17 01:27:00.317187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:52.179 [2024-11-17 01:27:00.424739] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:52.437 [2024-11-17 01:27:00.665185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59893 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59893 /var/tmp/spdk2.sock 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59893 ']' 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:53.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:53.005 01:27:01 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:53.005 [2024-11-17 01:27:01.393093] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:06:53.005 [2024-11-17 01:27:01.393762] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59893 ] 00:06:53.265 [2024-11-17 01:27:01.601625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.524 [2024-11-17 01:27:01.829485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.091 [2024-11-17 01:27:02.321938] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:06:55.998 01:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:55.998 01:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:55.998 01:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59893 00:06:55.998 01:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59893 00:06:55.998 01:27:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:56.566 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59877 00:06:56.566 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59877 ']' 00:06:56.566 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59877 00:06:56.566 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:06:56.566 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:56.826 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59877 00:06:56.826 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:56.826 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:56.826 killing process with pid 59877 00:06:56.826 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59877' 00:06:56.826 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59877 00:06:56.826 01:27:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59877 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59893 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59893 ']' 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 59893 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59893 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.019 killing process with pid 59893 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59893' 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 59893 00:07:01.019 01:27:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 59893 00:07:02.923 00:07:02.923 real 0m10.947s 00:07:02.923 user 0m11.838s 00:07:02.923 sys 0m1.280s 00:07:02.923 01:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.923 01:27:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.923 ************************************ 00:07:02.923 END TEST locking_app_on_unlocked_coremask 00:07:02.923 ************************************ 00:07:02.923 01:27:10 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:02.923 01:27:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.923 01:27:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.923 01:27:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.923 ************************************ 00:07:02.923 START TEST locking_app_on_locked_coremask 00:07:02.923 ************************************ 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60040 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60040 /var/tmp/spdk.sock 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60040 ']' 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:02.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:02.923 01:27:11 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.923 [2024-11-17 01:27:11.137459] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:02.923 [2024-11-17 01:27:11.137647] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60040 ] 00:07:02.923 [2024-11-17 01:27:11.314936] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.181 [2024-11-17 01:27:11.398372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.181 [2024-11-17 01:27:11.583668] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=60056 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 60056 /var/tmp/spdk2.sock 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60056 /var/tmp/spdk2.sock 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60056 /var/tmp/spdk2.sock 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60056 ']' 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:03.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:03.750 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:03.750 [2024-11-17 01:27:12.136644] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:03.750 [2024-11-17 01:27:12.136825] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60056 ] 00:07:04.040 [2024-11-17 01:27:12.316148] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60040 has claimed it. 00:07:04.040 [2024-11-17 01:27:12.316224] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:04.607 ERROR: process (pid: 60056) is no longer running 00:07:04.607 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60056) - No such process 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60040 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60040 00:07:04.607 01:27:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60040 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60040 ']' 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60040 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60040 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:04.866 killing process with pid 60040 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60040' 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60040 00:07:04.866 01:27:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60040 00:07:06.769 00:07:06.769 real 0m3.941s 00:07:06.769 user 0m4.330s 00:07:06.769 sys 0m0.696s 00:07:06.769 01:27:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.769 ************************************ 00:07:06.769 END 
TEST locking_app_on_locked_coremask 00:07:06.769 ************************************ 00:07:06.769 01:27:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.769 01:27:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:06.769 01:27:14 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.769 01:27:14 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.769 01:27:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.769 ************************************ 00:07:06.769 START TEST locking_overlapped_coremask 00:07:06.769 ************************************ 00:07:06.769 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:06.769 01:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=60115 00:07:06.769 01:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 60115 /var/tmp/spdk.sock 00:07:06.769 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60115 ']' 00:07:06.769 01:27:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:06.769 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.769 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.769 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.770 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.770 01:27:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:06.770 [2024-11-17 01:27:15.101547] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:06.770 [2024-11-17 01:27:15.101703] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60115 ] 00:07:07.028 [2024-11-17 01:27:15.266286] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:07.028 [2024-11-17 01:27:15.357732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.028 [2024-11-17 01:27:15.357897] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:07.028 [2024-11-17 01:27:15.357913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:07.287 [2024-11-17 01:27:15.573063] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=60140 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 60140 /var/tmp/spdk2.sock 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60140 /var/tmp/spdk2.sock 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 60140 /var/tmp/spdk2.sock 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 60140 ']' 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.855 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:07.855 [2024-11-17 01:27:16.229729] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
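This second target requests -m 0x1c while the first instance already runs with -m 0x7, and the two masks overlap on exactly one core, which is why the error that follows names core 2: 0x7 covers cores 0-2 and 0x1c covers cores 2-4. A quick shell check of that overlap (illustration only, not part of the test):

    # list the cores selected by each mask, then intersect them
    printf 'mask 0x7  -> cores'; for c in {0..7}; do (( 0x7  >> c & 1 )) && printf ' %d' "$c"; done; echo
    printf 'mask 0x1c -> cores'; for c in {0..7}; do (( 0x1c >> c & 1 )) && printf ' %d' "$c"; done; echo
    printf 'overlap   -> 0x%x\n' $(( 0x7 & 0x1c ))   # 0x4, i.e. core 2 only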
00:07:07.855 [2024-11-17 01:27:16.229922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60140 ] 00:07:08.113 [2024-11-17 01:27:16.428578] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60115 has claimed it. 00:07:08.113 [2024-11-17 01:27:16.428685] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:08.680 ERROR: process (pid: 60140) is no longer running 00:07:08.680 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60140) - No such process 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 60115 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 60115 ']' 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 60115 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.680 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60115 00:07:08.681 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:08.681 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:08.681 killing process with pid 60115 00:07:08.681 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60115' 00:07:08.681 01:27:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 60115 00:07:08.681 01:27:16 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 60115 00:07:10.584 00:07:10.584 real 0m3.756s 00:07:10.584 user 0m10.439s 00:07:10.584 sys 0m0.540s 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:10.584 ************************************ 00:07:10.584 END TEST locking_overlapped_coremask 00:07:10.584 ************************************ 00:07:10.584 01:27:18 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:10.584 01:27:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.584 01:27:18 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.584 01:27:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:10.584 ************************************ 00:07:10.584 START TEST locking_overlapped_coremask_via_rpc 00:07:10.584 ************************************ 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=60198 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 60198 /var/tmp/spdk.sock 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60198 ']' 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.584 01:27:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:10.584 [2024-11-17 01:27:18.946218] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:10.584 [2024-11-17 01:27:18.946401] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60198 ] 00:07:10.842 [2024-11-17 01:27:19.119678] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
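Before following this --disable-cpumask-locks run, it is worth spelling out check_remaining_locks, which was traced at the end of the previous test and is used again once this test enables the locks over RPC: it just compares the lock files actually present under /var/tmp with the set expected for a three-core claim. A condensed re-reading of the traced commands (the real helper in event/cpu_locks.sh may differ in detail):

    check_remaining_locks() {
        local locks=(/var/tmp/spdk_cpu_lock_*)                      # whatever lock files exist
        local locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})    # cores 0-2 and nothing else
        [[ ${locks[*]} == "${locks_expected[*]}" ]]
    }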
00:07:10.842 [2024-11-17 01:27:19.120022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:10.842 [2024-11-17 01:27:19.210659] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.842 [2024-11-17 01:27:19.210806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.842 [2024-11-17 01:27:19.210848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:11.100 [2024-11-17 01:27:19.410658] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:11.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=60216 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 60216 /var/tmp/spdk2.sock 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60216 ']' 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.668 01:27:19 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.668 [2024-11-17 01:27:20.066743] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:11.668 [2024-11-17 01:27:20.067207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60216 ] 00:07:11.927 [2024-11-17 01:27:20.265494] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:11.927 [2024-11-17 01:27:20.265569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:12.186 [2024-11-17 01:27:20.458263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:12.186 [2024-11-17 01:27:20.458355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:12.186 [2024-11-17 01:27:20.458384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:12.444 [2024-11-17 01:27:20.884963] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.823 [2024-11-17 01:27:21.903996] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 60198 has claimed it. 00:07:13.823 request: 00:07:13.823 { 00:07:13.823 "method": "framework_enable_cpumask_locks", 00:07:13.823 "req_id": 1 00:07:13.823 } 00:07:13.823 Got JSON-RPC error response 00:07:13.823 response: 00:07:13.823 { 00:07:13.823 "code": -32603, 00:07:13.823 "message": "Failed to claim CPU core: 2" 00:07:13.823 } 00:07:13.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
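This exchange is the point of the test: both targets were started with --disable-cpumask-locks, so the core locks are only taken when framework_enable_cpumask_locks is called over JSON-RPC. The call succeeds on the first target, which then owns the lock files for cores 0-2, and the identical call on the second target (sharing core 2) is refused with the -32603 response shown above. Replayed by hand with the sockets used in this run it would look like:

    # first target claims its cores' lock files at runtime
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock framework_enable_cpumask_locks
    # second target overlaps the first on core 2, so the same request is rejected:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
    # -> "Failed to claim CPU core: 2" (JSON-RPC error code -32603)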
00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:13.823 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 60198 /var/tmp/spdk.sock 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60198 ']' 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.824 01:27:21 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 60216 /var/tmp/spdk2.sock 00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60216 ']' 00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
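The "Waiting for process..." lines come from waitforlisten, which every test here leans on: it blocks until the freshly launched target answers on its RPC socket, giving up after max_retries=100 as the traced locals show. The loop below is only a hedged sketch of that idea built on rpc.py; the real helper in test/common/autotest_common.sh is more involved.

    # hypothetical simplification of waitforlisten <pid> [rpc socket path]
    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
        for (( i = 0; i < 100; i++ )); do
            # an answered rpc_get_methods proves the target is up and listening
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
            kill -0 "$pid" 2>/dev/null || return 1   # target died before it ever listened
            sleep 0.1
        done
        return 1
    }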
00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:13.824 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.082 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.082 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:14.082 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:14.082 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:14.082 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:14.082 ************************************ 00:07:14.082 END TEST locking_overlapped_coremask_via_rpc 00:07:14.082 ************************************ 00:07:14.082 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:14.082 00:07:14.082 real 0m3.704s 00:07:14.082 user 0m1.460s 00:07:14.082 sys 0m0.202s 00:07:14.082 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.082 01:27:22 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:14.341 01:27:22 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:14.341 01:27:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60198 ]] 00:07:14.341 01:27:22 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60198 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60198 ']' 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60198 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60198 00:07:14.341 killing process with pid 60198 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60198' 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60198 00:07:14.341 01:27:22 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60198 00:07:16.246 01:27:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60216 ]] 00:07:16.246 01:27:24 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60216 00:07:16.246 01:27:24 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60216 ']' 00:07:16.247 01:27:24 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60216 00:07:16.247 01:27:24 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:16.247 01:27:24 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:16.247 
01:27:24 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60216 00:07:16.247 killing process with pid 60216 00:07:16.247 01:27:24 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:16.247 01:27:24 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:16.247 01:27:24 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60216' 00:07:16.247 01:27:24 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 60216 00:07:16.247 01:27:24 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 60216 00:07:18.150 01:27:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.150 Process with pid 60198 is not found 00:07:18.150 Process with pid 60216 is not found 00:07:18.150 01:27:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:18.150 01:27:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 60198 ]] 00:07:18.150 01:27:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 60198 00:07:18.150 01:27:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60198 ']' 00:07:18.150 01:27:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60198 00:07:18.150 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60198) - No such process 00:07:18.150 01:27:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60198 is not found' 00:07:18.150 01:27:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 60216 ]] 00:07:18.150 01:27:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 60216 00:07:18.150 01:27:26 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 60216 ']' 00:07:18.150 01:27:26 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 60216 00:07:18.150 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (60216) - No such process 00:07:18.150 01:27:26 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 60216 is not found' 00:07:18.150 01:27:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:18.150 ************************************ 00:07:18.150 END TEST cpu_locks 00:07:18.150 ************************************ 00:07:18.150 00:07:18.150 real 0m43.093s 00:07:18.150 user 1m14.154s 00:07:18.150 sys 0m6.110s 00:07:18.150 01:27:26 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.150 01:27:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:18.150 ************************************ 00:07:18.150 END TEST event 00:07:18.150 ************************************ 00:07:18.150 00:07:18.150 real 1m14.232s 00:07:18.150 user 2m17.214s 00:07:18.150 sys 0m9.695s 00:07:18.150 01:27:26 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:18.150 01:27:26 event -- common/autotest_common.sh@10 -- # set +x 00:07:18.150 01:27:26 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:18.150 01:27:26 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:18.150 01:27:26 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.150 01:27:26 -- common/autotest_common.sh@10 -- # set +x 00:07:18.150 ************************************ 00:07:18.150 START TEST thread 00:07:18.150 ************************************ 00:07:18.150 01:27:26 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:18.150 * Looking for test storage... 
00:07:18.150 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:18.150 01:27:26 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:18.150 01:27:26 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:18.150 01:27:26 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:18.410 01:27:26 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:18.410 01:27:26 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:18.410 01:27:26 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:18.410 01:27:26 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:18.410 01:27:26 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:18.410 01:27:26 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:18.410 01:27:26 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:18.410 01:27:26 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:18.410 01:27:26 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:18.410 01:27:26 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:18.410 01:27:26 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:18.410 01:27:26 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:18.410 01:27:26 thread -- scripts/common.sh@345 -- # : 1 00:07:18.410 01:27:26 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:18.410 01:27:26 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:18.410 01:27:26 thread -- scripts/common.sh@365 -- # decimal 1 00:07:18.410 01:27:26 thread -- scripts/common.sh@353 -- # local d=1 00:07:18.410 01:27:26 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:18.410 01:27:26 thread -- scripts/common.sh@355 -- # echo 1 00:07:18.410 01:27:26 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:18.410 01:27:26 thread -- scripts/common.sh@366 -- # decimal 2 00:07:18.410 01:27:26 thread -- scripts/common.sh@353 -- # local d=2 00:07:18.410 01:27:26 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:18.410 01:27:26 thread -- scripts/common.sh@355 -- # echo 2 00:07:18.410 01:27:26 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:18.410 01:27:26 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:18.410 01:27:26 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:18.410 01:27:26 thread -- scripts/common.sh@368 -- # return 0 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:18.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.410 --rc genhtml_branch_coverage=1 00:07:18.410 --rc genhtml_function_coverage=1 00:07:18.410 --rc genhtml_legend=1 00:07:18.410 --rc geninfo_all_blocks=1 00:07:18.410 --rc geninfo_unexecuted_blocks=1 00:07:18.410 00:07:18.410 ' 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:18.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.410 --rc genhtml_branch_coverage=1 00:07:18.410 --rc genhtml_function_coverage=1 00:07:18.410 --rc genhtml_legend=1 00:07:18.410 --rc geninfo_all_blocks=1 00:07:18.410 --rc geninfo_unexecuted_blocks=1 00:07:18.410 00:07:18.410 ' 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:18.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:07:18.410 --rc genhtml_branch_coverage=1 00:07:18.410 --rc genhtml_function_coverage=1 00:07:18.410 --rc genhtml_legend=1 00:07:18.410 --rc geninfo_all_blocks=1 00:07:18.410 --rc geninfo_unexecuted_blocks=1 00:07:18.410 00:07:18.410 ' 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:18.410 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:18.410 --rc genhtml_branch_coverage=1 00:07:18.410 --rc genhtml_function_coverage=1 00:07:18.410 --rc genhtml_legend=1 00:07:18.410 --rc geninfo_all_blocks=1 00:07:18.410 --rc geninfo_unexecuted_blocks=1 00:07:18.410 00:07:18.410 ' 00:07:18.410 01:27:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:18.410 01:27:26 thread -- common/autotest_common.sh@10 -- # set +x 00:07:18.410 ************************************ 00:07:18.410 START TEST thread_poller_perf 00:07:18.410 ************************************ 00:07:18.410 01:27:26 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:18.410 [2024-11-17 01:27:26.711711] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:18.410 [2024-11-17 01:27:26.712075] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60393 ] 00:07:18.669 [2024-11-17 01:27:26.898706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:18.669 [2024-11-17 01:27:27.023913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.669 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:07:20.060 [2024-11-17T01:27:28.519Z] ====================================== 00:07:20.060 [2024-11-17T01:27:28.519Z] busy:2215395178 (cyc) 00:07:20.060 [2024-11-17T01:27:28.519Z] total_run_count: 345000 00:07:20.060 [2024-11-17T01:27:28.519Z] tsc_hz: 2200000000 (cyc) 00:07:20.060 [2024-11-17T01:27:28.519Z] ====================================== 00:07:20.060 [2024-11-17T01:27:28.519Z] poller_cost: 6421 (cyc), 2918 (nsec) 00:07:20.060 00:07:20.060 real 0m1.556s 00:07:20.060 user 0m1.358s 00:07:20.060 sys 0m0.089s 00:07:20.060 ************************************ 00:07:20.060 END TEST thread_poller_perf 00:07:20.060 ************************************ 00:07:20.060 01:27:28 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.060 01:27:28 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:20.060 01:27:28 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.060 01:27:28 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:20.060 01:27:28 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.060 01:27:28 thread -- common/autotest_common.sh@10 -- # set +x 00:07:20.060 ************************************ 00:07:20.060 START TEST thread_poller_perf 00:07:20.060 ************************************ 00:07:20.060 01:27:28 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:20.060 [2024-11-17 01:27:28.323765] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:20.060 [2024-11-17 01:27:28.323938] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60430 ] 00:07:20.060 [2024-11-17 01:27:28.502472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.321 Running 1000 pollers for 1 seconds with 0 microseconds period. 
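The poller_cost figure in each summary is simply the busy cycle count divided by the number of poller executions, converted to nanoseconds with the reported tsc_hz; the same arithmetic holds for the 0-period run reported below. For the 1-microsecond-period run summarised above:

    # back-of-envelope check, values copied from the summary above
    echo $(( 2215395178 / 345000 ))               # ~6421 cycles per poller call
    echo $(( 6421 * 1000000000 / 2200000000 ))    # ~2918 nsec at tsc_hz 2200000000

The 0-period run that follows executes its pollers far more often (4.67 million calls) and each call is much cheaper, presumably because untimed pollers skip the timer bookkeeping a periodic poller needs.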
00:07:20.321 [2024-11-17 01:27:28.584915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.699 [2024-11-17T01:27:30.158Z] ====================================== 00:07:21.699 [2024-11-17T01:27:30.158Z] busy:2203517176 (cyc) 00:07:21.699 [2024-11-17T01:27:30.158Z] total_run_count: 4670000 00:07:21.699 [2024-11-17T01:27:30.158Z] tsc_hz: 2200000000 (cyc) 00:07:21.699 [2024-11-17T01:27:30.158Z] ====================================== 00:07:21.699 [2024-11-17T01:27:30.158Z] poller_cost: 471 (cyc), 214 (nsec) 00:07:21.699 ************************************ 00:07:21.699 END TEST thread_poller_perf 00:07:21.699 ************************************ 00:07:21.699 00:07:21.699 real 0m1.490s 00:07:21.699 user 0m1.300s 00:07:21.699 sys 0m0.083s 00:07:21.699 01:27:29 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.699 01:27:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:21.699 01:27:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:21.699 ************************************ 00:07:21.699 END TEST thread 00:07:21.699 ************************************ 00:07:21.699 00:07:21.699 real 0m3.341s 00:07:21.699 user 0m2.800s 00:07:21.699 sys 0m0.317s 00:07:21.699 01:27:29 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:21.699 01:27:29 thread -- common/autotest_common.sh@10 -- # set +x 00:07:21.699 01:27:29 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:21.699 01:27:29 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:21.699 01:27:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:21.699 01:27:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:21.699 01:27:29 -- common/autotest_common.sh@10 -- # set +x 00:07:21.699 ************************************ 00:07:21.699 START TEST app_cmdline 00:07:21.699 ************************************ 00:07:21.699 01:27:29 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:21.699 * Looking for test storage... 
00:07:21.699 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:21.699 01:27:29 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:21.699 01:27:29 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:21.699 01:27:29 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:21.699 01:27:30 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:21.699 01:27:30 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:21.699 01:27:30 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:21.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.699 01:27:30 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:21.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.699 --rc genhtml_branch_coverage=1 00:07:21.699 --rc genhtml_function_coverage=1 00:07:21.699 --rc genhtml_legend=1 00:07:21.699 --rc geninfo_all_blocks=1 00:07:21.699 --rc geninfo_unexecuted_blocks=1 00:07:21.699 00:07:21.699 ' 00:07:21.699 01:27:30 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:21.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.699 --rc genhtml_branch_coverage=1 00:07:21.699 --rc genhtml_function_coverage=1 00:07:21.699 --rc genhtml_legend=1 00:07:21.699 --rc geninfo_all_blocks=1 00:07:21.699 --rc geninfo_unexecuted_blocks=1 00:07:21.699 00:07:21.699 ' 00:07:21.699 01:27:30 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:21.699 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.699 --rc genhtml_branch_coverage=1 00:07:21.699 --rc genhtml_function_coverage=1 00:07:21.699 --rc genhtml_legend=1 00:07:21.699 --rc geninfo_all_blocks=1 00:07:21.699 --rc geninfo_unexecuted_blocks=1 00:07:21.699 00:07:21.699 ' 00:07:21.700 01:27:30 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:21.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:21.700 --rc genhtml_branch_coverage=1 00:07:21.700 --rc genhtml_function_coverage=1 00:07:21.700 --rc genhtml_legend=1 00:07:21.700 --rc geninfo_all_blocks=1 00:07:21.700 --rc geninfo_unexecuted_blocks=1 00:07:21.700 00:07:21.700 ' 00:07:21.700 01:27:30 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:21.700 01:27:30 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=60517 00:07:21.700 01:27:30 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:21.700 01:27:30 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 60517 00:07:21.700 01:27:30 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 60517 ']' 00:07:21.700 01:27:30 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.700 01:27:30 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.700 01:27:30 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.700 01:27:30 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.700 01:27:30 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:21.959 [2024-11-17 01:27:30.178000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
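The spdk_tgt for this test is started with --rpcs-allowed spdk_get_version,rpc_get_methods, so only those two methods are accepted over the socket; that is what the rpc_get_methods count check and the env_dpdk_get_mem_stats probe further down exercise. By hand, on the default socket used in this run, the contrast looks like:

    # permitted: both methods are on the allowlist given at startup
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py rpc_get_methods
    # rejected: not on the allowlist, answered with "Method not found" (code -32601)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats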
00:07:21.959 [2024-11-17 01:27:30.178486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60517 ] 00:07:21.959 [2024-11-17 01:27:30.357761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.218 [2024-11-17 01:27:30.442361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.218 [2024-11-17 01:27:30.621550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:22.786 01:27:31 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:22.786 01:27:31 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:22.786 01:27:31 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:23.045 { 00:07:23.045 "version": "SPDK v25.01-pre git sha1 83e8405e4", 00:07:23.045 "fields": { 00:07:23.045 "major": 25, 00:07:23.045 "minor": 1, 00:07:23.045 "patch": 0, 00:07:23.045 "suffix": "-pre", 00:07:23.045 "commit": "83e8405e4" 00:07:23.045 } 00:07:23.045 } 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:23.045 01:27:31 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.045 01:27:31 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:23.046 01:27:31 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:23.046 01:27:31 app_cmdline -- common/autotest_common.sh@646 -- # 
[[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:23.046 01:27:31 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:23.305 request: 00:07:23.305 { 00:07:23.305 "method": "env_dpdk_get_mem_stats", 00:07:23.305 "req_id": 1 00:07:23.305 } 00:07:23.305 Got JSON-RPC error response 00:07:23.305 response: 00:07:23.305 { 00:07:23.305 "code": -32601, 00:07:23.305 "message": "Method not found" 00:07:23.305 } 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:23.305 01:27:31 app_cmdline -- app/cmdline.sh@1 -- # killprocess 60517 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 60517 ']' 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 60517 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60517 00:07:23.305 killing process with pid 60517 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60517' 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@973 -- # kill 60517 00:07:23.305 01:27:31 app_cmdline -- common/autotest_common.sh@978 -- # wait 60517 00:07:25.209 00:07:25.209 real 0m3.556s 00:07:25.209 user 0m4.056s 00:07:25.210 sys 0m0.516s 00:07:25.210 ************************************ 00:07:25.210 END TEST app_cmdline 00:07:25.210 ************************************ 00:07:25.210 01:27:33 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.210 01:27:33 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:25.210 01:27:33 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.210 01:27:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.210 01:27:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.210 01:27:33 -- common/autotest_common.sh@10 -- # set +x 00:07:25.210 ************************************ 00:07:25.210 START TEST version 00:07:25.210 ************************************ 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:25.210 * Looking for test storage... 
00:07:25.210 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.210 01:27:33 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.210 01:27:33 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.210 01:27:33 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.210 01:27:33 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.210 01:27:33 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.210 01:27:33 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.210 01:27:33 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.210 01:27:33 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.210 01:27:33 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.210 01:27:33 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.210 01:27:33 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.210 01:27:33 version -- scripts/common.sh@344 -- # case "$op" in 00:07:25.210 01:27:33 version -- scripts/common.sh@345 -- # : 1 00:07:25.210 01:27:33 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.210 01:27:33 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:25.210 01:27:33 version -- scripts/common.sh@365 -- # decimal 1 00:07:25.210 01:27:33 version -- scripts/common.sh@353 -- # local d=1 00:07:25.210 01:27:33 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.210 01:27:33 version -- scripts/common.sh@355 -- # echo 1 00:07:25.210 01:27:33 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.210 01:27:33 version -- scripts/common.sh@366 -- # decimal 2 00:07:25.210 01:27:33 version -- scripts/common.sh@353 -- # local d=2 00:07:25.210 01:27:33 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.210 01:27:33 version -- scripts/common.sh@355 -- # echo 2 00:07:25.210 01:27:33 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.210 01:27:33 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.210 01:27:33 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.210 01:27:33 version -- scripts/common.sh@368 -- # return 0 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.210 --rc genhtml_branch_coverage=1 00:07:25.210 --rc genhtml_function_coverage=1 00:07:25.210 --rc genhtml_legend=1 00:07:25.210 --rc geninfo_all_blocks=1 00:07:25.210 --rc geninfo_unexecuted_blocks=1 00:07:25.210 00:07:25.210 ' 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.210 --rc genhtml_branch_coverage=1 00:07:25.210 --rc genhtml_function_coverage=1 00:07:25.210 --rc genhtml_legend=1 00:07:25.210 --rc geninfo_all_blocks=1 00:07:25.210 --rc geninfo_unexecuted_blocks=1 00:07:25.210 00:07:25.210 ' 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.210 --rc lcov_branch_coverage=1 --rc 
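The extraction being traced here is the whole of version.sh's job: get_header_version greps one #define out of include/spdk/version.h, keeps the second field and strips the quotes, and the pieces are then glued into 25.1rc0 and compared with python3 -c 'import spdk; print(spdk.__version__)'. A condensed re-reading of the traced commands (the exact function body and suffix handling in app/version.sh are assumptions here):

    # argument is MAJOR, MINOR, PATCH or SUFFIX
    get_header_version() {
        grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" \
            /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
    }
    ver="$(get_header_version MAJOR).$(get_header_version MINOR)"    # -> 25.1
    [[ $(get_header_version SUFFIX) == -pre ]] && ver+=rc0           # -> 25.1rc0, matching spdk.__version__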
lcov_function_coverage=1 00:07:25.210 --rc genhtml_branch_coverage=1 00:07:25.210 --rc genhtml_function_coverage=1 00:07:25.210 --rc genhtml_legend=1 00:07:25.210 --rc geninfo_all_blocks=1 00:07:25.210 --rc geninfo_unexecuted_blocks=1 00:07:25.210 00:07:25.210 ' 00:07:25.210 01:27:33 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.210 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.210 --rc genhtml_branch_coverage=1 00:07:25.210 --rc genhtml_function_coverage=1 00:07:25.210 --rc genhtml_legend=1 00:07:25.210 --rc geninfo_all_blocks=1 00:07:25.210 --rc geninfo_unexecuted_blocks=1 00:07:25.210 00:07:25.210 ' 00:07:25.210 01:27:33 version -- app/version.sh@17 -- # get_header_version major 00:07:25.210 01:27:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.210 01:27:33 version -- app/version.sh@14 -- # cut -f2 00:07:25.210 01:27:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.210 01:27:33 version -- app/version.sh@17 -- # major=25 00:07:25.210 01:27:33 version -- app/version.sh@18 -- # get_header_version minor 00:07:25.210 01:27:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.210 01:27:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.210 01:27:33 version -- app/version.sh@14 -- # cut -f2 00:07:25.469 01:27:33 version -- app/version.sh@18 -- # minor=1 00:07:25.469 01:27:33 version -- app/version.sh@19 -- # get_header_version patch 00:07:25.469 01:27:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.469 01:27:33 version -- app/version.sh@14 -- # cut -f2 00:07:25.469 01:27:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.469 01:27:33 version -- app/version.sh@19 -- # patch=0 00:07:25.469 01:27:33 version -- app/version.sh@20 -- # get_header_version suffix 00:07:25.469 01:27:33 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:25.469 01:27:33 version -- app/version.sh@14 -- # cut -f2 00:07:25.469 01:27:33 version -- app/version.sh@14 -- # tr -d '"' 00:07:25.469 01:27:33 version -- app/version.sh@20 -- # suffix=-pre 00:07:25.469 01:27:33 version -- app/version.sh@22 -- # version=25.1 00:07:25.469 01:27:33 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:25.469 01:27:33 version -- app/version.sh@28 -- # version=25.1rc0 00:07:25.469 01:27:33 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:25.469 01:27:33 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:25.469 01:27:33 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:25.469 01:27:33 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:25.469 00:07:25.469 real 0m0.255s 00:07:25.469 user 0m0.169s 00:07:25.469 sys 0m0.121s 00:07:25.469 ************************************ 00:07:25.469 END TEST version 00:07:25.469 ************************************ 00:07:25.469 01:27:33 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:25.469 01:27:33 version -- common/autotest_common.sh@10 -- # set +x 00:07:25.469 01:27:33 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:25.469 01:27:33 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:25.469 01:27:33 -- spdk/autotest.sh@194 -- # uname -s 00:07:25.469 01:27:33 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:07:25.469 01:27:33 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:25.469 01:27:33 -- spdk/autotest.sh@195 -- # [[ 1 -eq 1 ]] 00:07:25.469 01:27:33 -- spdk/autotest.sh@201 -- # [[ 0 -eq 0 ]] 00:07:25.470 01:27:33 -- spdk/autotest.sh@202 -- # run_test spdk_dd /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:25.470 01:27:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:25.470 01:27:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.470 01:27:33 -- common/autotest_common.sh@10 -- # set +x 00:07:25.470 ************************************ 00:07:25.470 START TEST spdk_dd 00:07:25.470 ************************************ 00:07:25.470 01:27:33 spdk_dd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/dd.sh 00:07:25.470 * Looking for test storage... 00:07:25.470 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:25.470 01:27:33 spdk_dd -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:25.470 01:27:33 spdk_dd -- common/autotest_common.sh@1693 -- # lcov --version 00:07:25.470 01:27:33 spdk_dd -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:25.470 01:27:33 spdk_dd -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@336 -- # IFS=.-: 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@336 -- # read -ra ver1 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@337 -- # IFS=.-: 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@337 -- # read -ra ver2 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@338 -- # local 'op=<' 00:07:25.728 01:27:33 spdk_dd -- scripts/common.sh@340 -- # ver1_l=2 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@341 -- # ver2_l=1 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@344 -- # case "$op" in 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@345 -- # : 1 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@365 -- # decimal 1 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@353 -- # local d=1 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@355 -- # echo 1 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@365 -- # ver1[v]=1 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@366 -- # decimal 2 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@353 -- # local d=2 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@355 -- # echo 2 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@366 -- # ver2[v]=2 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@368 -- # return 0 00:07:25.729 01:27:33 spdk_dd -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:25.729 01:27:33 spdk_dd -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:25.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.729 --rc genhtml_branch_coverage=1 00:07:25.729 --rc genhtml_function_coverage=1 00:07:25.729 --rc genhtml_legend=1 00:07:25.729 --rc geninfo_all_blocks=1 00:07:25.729 --rc geninfo_unexecuted_blocks=1 00:07:25.729 00:07:25.729 ' 00:07:25.729 01:27:33 spdk_dd -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:25.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.729 --rc genhtml_branch_coverage=1 00:07:25.729 --rc genhtml_function_coverage=1 00:07:25.729 --rc genhtml_legend=1 00:07:25.729 --rc geninfo_all_blocks=1 00:07:25.729 --rc geninfo_unexecuted_blocks=1 00:07:25.729 00:07:25.729 ' 00:07:25.729 01:27:33 spdk_dd -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:25.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.729 --rc genhtml_branch_coverage=1 00:07:25.729 --rc genhtml_function_coverage=1 00:07:25.729 --rc genhtml_legend=1 00:07:25.729 --rc geninfo_all_blocks=1 00:07:25.729 --rc geninfo_unexecuted_blocks=1 00:07:25.729 00:07:25.729 ' 00:07:25.729 01:27:33 spdk_dd -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:25.729 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:25.729 --rc genhtml_branch_coverage=1 00:07:25.729 --rc genhtml_function_coverage=1 00:07:25.729 --rc genhtml_legend=1 00:07:25.729 --rc geninfo_all_blocks=1 00:07:25.729 --rc geninfo_unexecuted_blocks=1 00:07:25.729 00:07:25.729 ' 00:07:25.729 01:27:33 spdk_dd -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@15 -- # shopt -s extglob 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:25.729 01:27:33 spdk_dd -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:25.729 01:27:33 spdk_dd -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.729 01:27:33 spdk_dd -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.729 01:27:33 spdk_dd -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.729 01:27:33 spdk_dd -- paths/export.sh@5 -- # export PATH 00:07:25.729 01:27:33 spdk_dd -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:25.729 01:27:33 spdk_dd -- dd/dd.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:25.989 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:25.989 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:25.989 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:07:25.989 01:27:34 spdk_dd -- dd/dd.sh@11 -- # nvmes=($(nvme_in_userspace)) 00:07:25.989 01:27:34 spdk_dd -- dd/dd.sh@11 -- # nvme_in_userspace 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@312 -- # local bdf bdfs 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@313 -- # local nvmes 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@315 -- # [[ -n '' ]] 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@298 -- # local bdf= 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@233 -- # local class 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@234 -- # local subclass 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@235 -- # local progif 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@236 -- # printf %02x 1 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@236 -- # class=01 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@237 -- # printf %02x 8 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@237 -- # subclass=08 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@238 -- # printf %02x 2 00:07:25.989 01:27:34 spdk_dd -- 
scripts/common.sh@238 -- # progif=02 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@240 -- # hash lspci 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@242 -- # lspci -mm -n -D 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@243 -- # grep -i -- -p02 00:07:25.989 01:27:34 spdk_dd -- scripts/common.sh@245 -- # tr -d '"' 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@18 -- # local i 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@25 -- # [[ -z '' ]] 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@27 -- # return 0 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@323 -- # uname -s 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@328 -- # (( 2 )) 00:07:25.990 01:27:34 spdk_dd -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:07:25.990 01:27:34 spdk_dd -- dd/dd.sh@13 -- # check_liburing 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@139 -- # local lib 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@140 -- # local -g liburing_in_use=0 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@137 -- # objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@137 -- # grep NEEDED 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libasan.so.8 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_malloc.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_null.so.6.0 == liburing.so.* ]] 00:07:25.990 
01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_nvme.so.7.1 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_passthru.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_lvol.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_raid.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_error.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_gpt.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_split.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_delay.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_zone_block.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs_bdev.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blobfs.so.10.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob_bdev.so.11.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_lvol.so.10.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_blob.so.11.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_nvme.so.15.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_provider.so.7.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rdma_utils.so.1.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_aio.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_ftl.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ftl.so.9.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 
01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_virtio.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_virtio.so.7.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfio_user.so.5.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_iscsi.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev_uring.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_error.so.2.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_ioat.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_ioat.so.7.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_dsa.so.5.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel_iaa.so.3.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_idxd.so.12.1 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dynamic.so.4.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_env_dpdk.so.15.1 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_dpdk_governor.so.4.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scheduler_gscheduler.so.4.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_posix.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock_uring.so.5.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_device.so.3.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_scsi.so.9.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vfu_tgt.so.3.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ 
libspdk_fuse_dispatcher.so.1.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_file.so.2.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring_linux.so.1.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev_aio.so.1.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_fsdev.so.2.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event.so.14.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_bdev.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_bdev.so.17.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_notify.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_accel.so.6.0 == liburing.so.* ]] 00:07:25.990 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_accel.so.16.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_dma.so.5.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_vmd.so.6.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_vmd.so.6.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_sock.so.5.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_sock.so.10.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_iobuf.so.3.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_event_keyring.so.1.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_init.so.6.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_thread.so.11.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_trace.so.11.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 
-- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_keyring.so.2.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_rpc.so.6.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_jsonrpc.so.6.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_json.so.6.0 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_util.so.10.1 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ libspdk_log.so.7.1 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_bus_pci.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_cryptodev.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_dmadev.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_eal.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ethdev.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_hash.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_kvargs.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_log.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mbuf.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_mempool_ring.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_net.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_pci.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_power.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_rcu.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read 
-r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_ring.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_telemetry.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ librte_vhost.so.24 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@142 -- # read -r _ lib _ 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@143 -- # [[ liburing.so.2 == liburing.so.* ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@144 -- # printf '* spdk_dd linked to liburing\n' 00:07:25.991 * spdk_dd linked to liburing 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@146 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:07:25.991 01:27:34 spdk_dd -- dd/common.sh@147 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@23 -- # CONFIG_CET=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:07:25.991 01:27:34 spdk_dd 
-- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:07:25.991 01:27:34 spdk_dd -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:07:25.992 01:27:34 spdk_dd -- 
common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@76 -- # CONFIG_FC=n 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:07:25.992 01:27:34 spdk_dd -- common/build_config.sh@90 -- # CONFIG_URING=y 00:07:25.992 01:27:34 spdk_dd -- dd/common.sh@149 -- # [[ y != y ]] 00:07:25.992 01:27:34 spdk_dd -- dd/common.sh@152 -- # export liburing_in_use=1 00:07:25.992 01:27:34 spdk_dd -- dd/common.sh@152 -- # liburing_in_use=1 00:07:25.992 01:27:34 spdk_dd -- dd/common.sh@153 -- # return 0 00:07:25.992 01:27:34 spdk_dd -- dd/dd.sh@15 -- # (( liburing_in_use == 0 && SPDK_TEST_URING == 1 )) 00:07:25.992 01:27:34 spdk_dd -- dd/dd.sh@20 -- # run_test spdk_dd_basic_rw /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:25.992 01:27:34 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:25.992 01:27:34 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:25.992 01:27:34 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:07:25.992 ************************************ 00:07:25.992 START TEST spdk_dd_basic_rw 00:07:25.992 ************************************ 00:07:25.992 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/basic_rw.sh 0000:00:10.0 0000:00:11.0 00:07:26.251 * Looking for test storage... 
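(Context for the check_liburing trace above: dd/common.sh scans the NEEDED entries in the dynamic section of the spdk_dd binary and sets liburing_in_use once a liburing.so.* dependency appears, which is why every linked library is tested in turn before liburing.so.2 matches and "spdk_dd linked to liburing" is printed. A condensed sketch of that loop, reconstructed from the trace rather than copied from dd/common.sh:)

    # illustrative sketch, not part of the captured log
    liburing_in_use=0
    while read -r _ lib _; do
        # objdump prints lines like: "  NEEDED  liburing.so.2"
        [[ $lib == liburing.so.* ]] && liburing_in_use=1
    done < <(objdump -p /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd | grep NEEDED)
    (( liburing_in_use )) && printf '* spdk_dd linked to liburing\n'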
00:07:26.251 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@344 -- # case "$op" in 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@345 -- # : 1 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # decimal 1 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=1 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 1 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # decimal 2 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@353 -- # local d=2 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@355 -- # echo 2 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@368 -- # return 0 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.252 --rc genhtml_branch_coverage=1 00:07:26.252 --rc genhtml_function_coverage=1 00:07:26.252 --rc genhtml_legend=1 00:07:26.252 --rc geninfo_all_blocks=1 00:07:26.252 --rc geninfo_unexecuted_blocks=1 00:07:26.252 00:07:26.252 ' 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.252 --rc genhtml_branch_coverage=1 00:07:26.252 --rc genhtml_function_coverage=1 00:07:26.252 --rc genhtml_legend=1 00:07:26.252 --rc geninfo_all_blocks=1 00:07:26.252 --rc geninfo_unexecuted_blocks=1 00:07:26.252 00:07:26.252 ' 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.252 --rc genhtml_branch_coverage=1 00:07:26.252 --rc genhtml_function_coverage=1 00:07:26.252 --rc genhtml_legend=1 00:07:26.252 --rc geninfo_all_blocks=1 00:07:26.252 --rc geninfo_unexecuted_blocks=1 00:07:26.252 00:07:26.252 ' 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.252 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.252 --rc genhtml_branch_coverage=1 00:07:26.252 --rc genhtml_function_coverage=1 00:07:26.252 --rc genhtml_legend=1 00:07:26.252 --rc geninfo_all_blocks=1 00:07:26.252 --rc geninfo_unexecuted_blocks=1 00:07:26.252 00:07:26.252 ' 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@5 -- # export PATH 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@80 -- # trap cleanup EXIT 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@82 -- # nvmes=("$@") 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0=Nvme0 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # nvme0_pci=0000:00:10.0 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@83 -- # bdev0=Nvme0n1 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@85 -- # declare -A method_bdev_nvme_attach_controller_0 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@91 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@92 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:26.252 01:27:34 
spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # get_native_nvme_bs 0000:00:10.0 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@124 -- # local pci=0000:00:10.0 lbaf id 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # mapfile -t id 00:07:26.252 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:pcie traddr:0000:00:10.0' 00:07:26.514 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@129 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information 
Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 
Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ Current LBA Format: *LBA Format #([0-9]+) ]] 00:07:26.514 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@130 -- # lbaf=04 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@131 -- # [[ ===================================================== NVMe Controller at 0000:00:10.0 [1b36:0010] ===================================================== Controller Capabilities/Features ================================ Vendor ID: 1b36 Subsystem Vendor ID: 1af4 Serial Number: 12340 Model Number: QEMU NVMe Ctrl Firmware Version: 8.0.0 Recommended Arb Burst: 6 IEEE OUI Identifier: 00 54 52 Multi-path I/O May have multiple subsystem ports: No May have multiple controllers: No Associated with SR-IOV VF: No Max Data Transfer Size: 524288 Max Number of Namespaces: 256 Max Number of I/O Queues: 64 NVMe Specification Version (VS): 1.4 NVMe Specification Version (Identify): 1.4 Maximum Queue Entries: 2048 Contiguous Queues Required: Yes Arbitration Mechanisms Supported 
Weighted Round Robin: Not Supported Vendor Specific: Not Supported Reset Timeout: 7500 ms Doorbell Stride: 4 bytes NVM Subsystem Reset: Not Supported Command Sets Supported NVM Command Set: Supported Boot Partition: Not Supported Memory Page Size Minimum: 4096 bytes Memory Page Size Maximum: 65536 bytes Persistent Memory Region: Not Supported Optional Asynchronous Events Supported Namespace Attribute Notices: Supported Firmware Activation Notices: Not Supported ANA Change Notices: Not Supported PLE Aggregate Log Change Notices: Not Supported LBA Status Info Alert Notices: Not Supported EGE Aggregate Log Change Notices: Not Supported Normal NVM Subsystem Shutdown event: Not Supported Zone Descriptor Change Notices: Not Supported Discovery Log Change Notices: Not Supported Controller Attributes 128-bit Host Identifier: Not Supported Non-Operational Permissive Mode: Not Supported NVM Sets: Not Supported Read Recovery Levels: Not Supported Endurance Groups: Not Supported Predictable Latency Mode: Not Supported Traffic Based Keep ALive: Not Supported Namespace Granularity: Not Supported SQ Associations: Not Supported UUID List: Not Supported Multi-Domain Subsystem: Not Supported Fixed Capacity Management: Not Supported Variable Capacity Management: Not Supported Delete Endurance Group: Not Supported Delete NVM Set: Not Supported Extended LBA Formats Supported: Supported Flexible Data Placement Supported: Not Supported Controller Memory Buffer Support ================================ Supported: No Persistent Memory Region Support ================================ Supported: No Admin Command Set Attributes ============================ Security Send/Receive: Not Supported Format NVM: Supported Firmware Activate/Download: Not Supported Namespace Management: Supported Device Self-Test: Not Supported Directives: Supported NVMe-MI: Not Supported Virtualization Management: Not Supported Doorbell Buffer Config: Supported Get LBA Status Capability: Not Supported Command & Feature Lockdown Capability: Not Supported Abort Command Limit: 4 Async Event Request Limit: 4 Number of Firmware Slots: N/A Firmware Slot 1 Read-Only: N/A Firmware Activation Without Reset: N/A Multiple Update Detection Support: N/A Firmware Update Granularity: No Information Provided Per-Namespace SMART Log: Yes Asymmetric Namespace Access Log Page: Not Supported Subsystem NQN: nqn.2019-08.org.qemu:12340 Command Effects Log Page: Supported Get Log Page Extended Data: Supported Telemetry Log Pages: Not Supported Persistent Event Log Pages: Not Supported Supported Log Pages Log Page: May Support Commands Supported & Effects Log Page: Not Supported Feature Identifiers & Effects Log Page:May Support NVMe-MI Commands & Effects Log Page: May Support Data Area 4 for Telemetry Log: Not Supported Error Log Page Entries Supported: 1 Keep Alive: Not Supported NVM Command Set Attributes ========================== Submission Queue Entry Size Max: 64 Min: 64 Completion Queue Entry Size Max: 16 Min: 16 Number of Namespaces: 256 Compare Command: Supported Write Uncorrectable Command: Not Supported Dataset Management Command: Supported Write Zeroes Command: Supported Set Features Save Field: Supported Reservations: Not Supported Timestamp: Supported Copy: Supported Volatile Write Cache: Present Atomic Write Unit (Normal): 1 Atomic Write Unit (PFail): 1 Atomic Compare & Write Unit: 1 Fused Compare & Write: Not Supported Scatter-Gather List SGL Command Set: Supported SGL Keyed: Not Supported SGL Bit Bucket Descriptor: Not Supported SGL Metadata Pointer: 
Not Supported Oversized SGL: Not Supported SGL Metadata Address: Not Supported SGL Offset: Not Supported Transport SGL Data Block: Not Supported Replay Protected Memory Block: Not Supported Firmware Slot Information ========================= Active slot: 1 Slot 1 Firmware Revision: 1.0 Commands Supported and Effects ============================== Admin Commands -------------- Delete I/O Submission Queue (00h): Supported Create I/O Submission Queue (01h): Supported Get Log Page (02h): Supported Delete I/O Completion Queue (04h): Supported Create I/O Completion Queue (05h): Supported Identify (06h): Supported Abort (08h): Supported Set Features (09h): Supported Get Features (0Ah): Supported Asynchronous Event Request (0Ch): Supported Namespace Attachment (15h): Supported NS-Inventory-Change Directive Send (19h): Supported Directive Receive (1Ah): Supported Virtualization Management (1Ch): Supported Doorbell Buffer Config (7Ch): Supported Format NVM (80h): Supported LBA-Change I/O Commands ------------ Flush (00h): Supported LBA-Change Write (01h): Supported LBA-Change Read (02h): Supported Compare (05h): Supported Write Zeroes (08h): Supported LBA-Change Dataset Management (09h): Supported LBA-Change Unknown (0Ch): Supported Unknown (12h): Supported Copy (19h): Supported LBA-Change Unknown (1Dh): Supported LBA-Change Error Log ========= Arbitration =========== Arbitration Burst: no limit Power Management ================ Number of Power States: 1 Current Power State: Power State #0 Power State #0: Max Power: 25.00 W Non-Operational State: Operational Entry Latency: 16 microseconds Exit Latency: 4 microseconds Relative Read Throughput: 0 Relative Read Latency: 0 Relative Write Throughput: 0 Relative Write Latency: 0 Idle Power: Not Reported Active Power: Not Reported Non-Operational Permissive Mode: Not Supported Health Information ================== Critical Warnings: Available Spare Space: OK Temperature: OK Device Reliability: OK Read Only: No Volatile Memory Backup: OK Current Temperature: 323 Kelvin (50 Celsius) Temperature Threshold: 343 Kelvin (70 Celsius) Available Spare: 0% Available Spare Threshold: 0% Life Percentage Used: 0% Data Units Read: 22 Data Units Written: 3 Host Read Commands: 496 Host Write Commands: 2 Controller Busy Time: 0 minutes Power Cycles: 0 Power On Hours: 0 hours Unsafe Shutdowns: 0 Unrecoverable Media Errors: 0 Lifetime Error Log Entries: 0 Warning Temperature Time: 0 minutes Critical Temperature Time: 0 minutes Number of Queues ================ Number of I/O Submission Queues: 64 Number of I/O Completion Queues: 64 ZNS Specific Controller Data ============================ Zone Append Size Limit: 0 Active Namespaces ================= Namespace ID:1 Error Recovery Timeout: Unlimited Command Set Identifier: NVM (00h) Deallocate: Supported Deallocated/Unwritten Error: Supported Deallocated Read Value: All 0x00 Deallocate in Write Zeroes: Not Supported Deallocated Guard Field: 0xFFFF Flush: Supported Reservation: Not Supported Namespace Sharing Capabilities: Private Size (in LBAs): 1310720 (5GiB) Capacity (in LBAs): 1310720 (5GiB) Utilization (in LBAs): 1310720 (5GiB) Thin Provisioning: Not Supported Per-NS Atomic Units: No Maximum Single Source Range Length: 128 Maximum Copy Length: 128 Maximum Source Range Count: 128 NGUID/EUI64 Never Reused: No Namespace Write Protected: No Number of LBA Formats: 8 Current LBA Format: LBA Format #04 LBA Format #00: Data Size: 512 Metadata Size: 0 LBA Format #01: Data Size: 512 Metadata Size: 8 LBA Format #02: Data Size: 512 
Metadata Size: 16 LBA Format #03: Data Size: 512 Metadata Size: 64 LBA Format #04: Data Size: 4096 Metadata Size: 0 LBA Format #05: Data Size: 4096 Metadata Size: 8 LBA Format #06: Data Size: 4096 Metadata Size: 16 LBA Format #07: Data Size: 4096 Metadata Size: 64 NVM Specific Namespace Data =========================== Logical Block Storage Tag Mask: 0 Protection Information Capabilities: 16b Guard Protection Information Storage Tag Support: No 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 Storage Tag Check Read Support: No Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI =~ LBA Format #04: Data Size: *([0-9]+) ]] 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@132 -- # lbaf=4096 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@134 -- # echo 4096 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@93 -- # native_bs=4096 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # : 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # run_test dd_bs_lt_native_bs NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@96 -- # gen_conf 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:26.515 ************************************ 00:07:26.515 START TEST dd_bs_lt_native_bs 00:07:26.515 ************************************ 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1129 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@652 -- # local es=0 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # type -t 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:07:26.515 01:27:34 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --ob=Nvme0n1 --bs=2048 --json /dev/fd/61 00:07:26.774 { 00:07:26.774 "subsystems": [ 00:07:26.774 { 00:07:26.774 "subsystem": "bdev", 00:07:26.774 "config": [ 00:07:26.774 { 00:07:26.774 "params": { 00:07:26.774 "trtype": "pcie", 00:07:26.774 "traddr": "0000:00:10.0", 00:07:26.774 "name": "Nvme0" 00:07:26.774 }, 00:07:26.774 "method": "bdev_nvme_attach_controller" 00:07:26.774 }, 00:07:26.774 { 00:07:26.774 "method": "bdev_wait_for_examine" 00:07:26.774 } 00:07:26.774 ] 00:07:26.774 } 00:07:26.774 ] 00:07:26.774 } 00:07:26.774 [2024-11-17 01:27:35.058841] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:26.774 [2024-11-17 01:27:35.059024] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60883 ] 00:07:27.034 [2024-11-17 01:27:35.247369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.034 [2024-11-17 01:27:35.373373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.293 [2024-11-17 01:27:35.559897] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:27.293 [2024-11-17 01:27:35.717256] spdk_dd.c:1161:dd_run: *ERROR*: --bs value cannot be less than input (1) neither output (4096) native block size 00:07:27.293 [2024-11-17 01:27:35.717344] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:27.861 [2024-11-17 01:27:36.134016] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@655 -- # es=234 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@664 -- # es=106 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@665 -- # case "$es" in 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@672 -- # es=1 00:07:28.120 ************************************ 00:07:28.120 END TEST dd_bs_lt_native_bs 00:07:28.120 ************************************ 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.120 00:07:28.120 real 0m1.423s 00:07:28.120 user 0m1.158s 00:07:28.120 sys 0m0.215s 00:07:28.120 
01:27:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_bs_lt_native_bs -- common/autotest_common.sh@10 -- # set +x 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@103 -- # run_test dd_rw basic_rw 4096 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.120 ************************************ 00:07:28.120 START TEST dd_rw 00:07:28.120 ************************************ 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1129 -- # basic_rw 4096 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@11 -- # local native_bs=4096 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@12 -- # local count size 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@13 -- # local qds bss 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@15 -- # qds=(1 64) 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@17 -- # for bs in {0..2} 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@18 -- # bss+=($((native_bs << bs))) 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:28.120 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.728 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=1 --json /dev/fd/62 00:07:28.728 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:28.728 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:28.728 01:27:36 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:28.728 { 00:07:28.728 "subsystems": [ 00:07:28.728 { 00:07:28.728 "subsystem": "bdev", 00:07:28.728 "config": [ 00:07:28.728 { 00:07:28.728 "params": { 00:07:28.728 "trtype": "pcie", 00:07:28.728 "traddr": "0000:00:10.0", 00:07:28.728 "name": "Nvme0" 00:07:28.728 }, 00:07:28.728 "method": "bdev_nvme_attach_controller" 00:07:28.728 }, 00:07:28.728 { 00:07:28.728 "method": "bdev_wait_for_examine" 00:07:28.728 } 00:07:28.728 ] 00:07:28.728 } 00:07:28.728 
] 00:07:28.728 } 00:07:28.728 [2024-11-17 01:27:37.105147] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:28.728 [2024-11-17 01:27:37.105617] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60926 ] 00:07:28.987 [2024-11-17 01:27:37.287744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.987 [2024-11-17 01:27:37.370185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.246 [2024-11-17 01:27:37.540831] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:29.246  [2024-11-17T01:27:38.643Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:30.184 00:07:30.184 01:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:30.184 01:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=1 --count=15 --json /dev/fd/62 00:07:30.184 01:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:30.184 01:27:38 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:30.184 { 00:07:30.184 "subsystems": [ 00:07:30.184 { 00:07:30.184 "subsystem": "bdev", 00:07:30.184 "config": [ 00:07:30.184 { 00:07:30.184 "params": { 00:07:30.184 "trtype": "pcie", 00:07:30.184 "traddr": "0000:00:10.0", 00:07:30.184 "name": "Nvme0" 00:07:30.184 }, 00:07:30.184 "method": "bdev_nvme_attach_controller" 00:07:30.184 }, 00:07:30.184 { 00:07:30.184 "method": "bdev_wait_for_examine" 00:07:30.184 } 00:07:30.184 ] 00:07:30.184 } 00:07:30.184 ] 00:07:30.184 } 00:07:30.184 [2024-11-17 01:27:38.621714] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
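The dd/common.sh trace above (the @130 through @134 steps) derives the bdev's native block size by matching the controller identify dump twice: first for the currently selected LBA format (#04), then for that format's data size (4096). A minimal bash sketch of the same two-step match follows; the function and variable names are my own, not taken from dd/common.sh.

native_bs_from_identify() {
    # Sketch only: names assumed, logic mirrors the regexes visible in the trace above.
    local id_dump=$1 lbaf re
    re='Current LBA Format: *LBA Format #([0-9]+)'
    [[ $id_dump =~ $re ]] || return 1          # selected format, "04" in this run
    lbaf=${BASH_REMATCH[1]}
    re="LBA Format #${lbaf}: Data Size: *([0-9]+)"
    [[ $id_dump =~ $re ]] || return 1          # its data size, 4096 here
    echo "${BASH_REMATCH[1]}"
}

In this run that resolves to lbaf=04 and a native block size of 4096, which basic_rw.sh then records as native_bs.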
00:07:30.184 [2024-11-17 01:27:38.621895] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60957 ] 00:07:30.443 [2024-11-17 01:27:38.783334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.443 [2024-11-17 01:27:38.864120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.703 [2024-11-17 01:27:39.011750] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:30.961  [2024-11-17T01:27:39.988Z] Copying: 60/60 [kB] (average 19 MBps) 00:07:31.529 00:07:31.529 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:31.529 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:31.530 01:27:39 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:31.530 { 00:07:31.530 "subsystems": [ 00:07:31.530 { 00:07:31.530 "subsystem": "bdev", 00:07:31.530 "config": [ 00:07:31.530 { 00:07:31.530 "params": { 00:07:31.530 "trtype": "pcie", 00:07:31.530 "traddr": "0000:00:10.0", 00:07:31.530 "name": "Nvme0" 00:07:31.530 }, 00:07:31.530 "method": "bdev_nvme_attach_controller" 00:07:31.530 }, 00:07:31.530 { 00:07:31.530 "method": "bdev_wait_for_examine" 00:07:31.530 } 00:07:31.530 ] 00:07:31.530 } 00:07:31.530 ] 00:07:31.530 } 00:07:31.530 [2024-11-17 01:27:39.954482] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
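The dd_bs_lt_native_bs test that ran earlier above (ending at 01:27:36) only asserts a failure: spdk_dd must refuse a --bs smaller than the probed native block size, and the trace shows the expected error ("--bs value cannot be less than input (1) neither output (4096) native block size"). A rough stand-alone sketch of that expectation; the input data, the plain "!" in place of the suite's NOT helper, and the inlined config string are simplifications of my own.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
# Same bdev config that gen_conf prints in the trace, collapsed into one string.
conf='{"subsystems":[{"subsystem":"bdev","config":['
conf+='{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},'
conf+='{"method":"bdev_wait_for_examine"}]}]}'
# 8 KiB of random input is a guess; the trace feeds generated data through /dev/fd/62.
if ! "$SPDK_DD" --if=<(head -c 8192 /dev/urandom) --ob=Nvme0n1 --bs=2048 --json <(printf '%s' "$conf"); then
    echo "rejected as expected: bs=2048 is below the 4096-byte native block size"
fi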
00:07:31.530 [2024-11-17 01:27:39.954955] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60979 ] 00:07:31.788 [2024-11-17 01:27:40.136356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.788 [2024-11-17 01:27:40.238466] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.047 [2024-11-17 01:27:40.404450] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:32.306  [2024-11-17T01:27:41.701Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:33.242 00:07:33.242 01:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:33.242 01:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=15 00:07:33.242 01:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=15 00:07:33.242 01:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=61440 00:07:33.242 01:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 61440 00:07:33.242 01:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:33.242 01:27:41 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.811 01:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=4096 --qd=64 --json /dev/fd/62 00:07:33.811 01:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:33.811 01:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:33.811 01:27:42 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:33.811 { 00:07:33.811 "subsystems": [ 00:07:33.811 { 00:07:33.811 "subsystem": "bdev", 00:07:33.811 "config": [ 00:07:33.811 { 00:07:33.811 "params": { 00:07:33.811 "trtype": "pcie", 00:07:33.811 "traddr": "0000:00:10.0", 00:07:33.811 "name": "Nvme0" 00:07:33.811 }, 00:07:33.811 "method": "bdev_nvme_attach_controller" 00:07:33.811 }, 00:07:33.811 { 00:07:33.811 "method": "bdev_wait_for_examine" 00:07:33.811 } 00:07:33.811 ] 00:07:33.811 } 00:07:33.811 ] 00:07:33.811 } 00:07:33.811 [2024-11-17 01:27:42.125438] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
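Each dd_rw pass in this sweep follows the same four steps: write a generated dump file out through the Nvme0n1 bdev, read it back into a second file, byte-compare the two, then zero the region before the next pass. Condensed into one self-contained sketch with the values from the 4096-byte, qd=1 pass that just completed; head -c stands in for gen_bytes, and the config string repeats what gen_conf prints in the trace.

SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
DUMP0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0
DUMP1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1
conf='{"subsystems":[{"subsystem":"bdev","config":['
conf+='{"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},'
conf+='{"method":"bdev_wait_for_examine"}]}]}'

head -c 61440 /dev/urandom > "$DUMP0"                                                              # stand-in for gen_bytes 61440
"$SPDK_DD" --if="$DUMP0" --ob=Nvme0n1 --bs=4096 --qd=1 --json <(printf '%s' "$conf")               # write out
"$SPDK_DD" --ib=Nvme0n1 --of="$DUMP1" --bs=4096 --qd=1 --count=15 --json <(printf '%s' "$conf")    # read back
diff -q "$DUMP0" "$DUMP1"                                                                          # round trip must be byte-identical
"$SPDK_DD" --if=/dev/zero --ob=Nvme0n1 --bs=1048576 --count=1 --json <(printf '%s' "$conf")        # clear_nvme between passes

The --count=15 on the read bounds it to exactly 15 x 4096 = 61440 bytes, so diff compares files of equal length.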
00:07:33.811 [2024-11-17 01:27:42.125614] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61010 ] 00:07:34.072 [2024-11-17 01:27:42.305725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:34.072 [2024-11-17 01:27:42.386464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.331 [2024-11-17 01:27:42.538219] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:34.331  [2024-11-17T01:27:43.727Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:35.268 00:07:35.268 01:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:35.268 01:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=4096 --qd=64 --count=15 --json /dev/fd/62 00:07:35.268 01:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:35.268 01:27:43 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:35.268 { 00:07:35.269 "subsystems": [ 00:07:35.269 { 00:07:35.269 "subsystem": "bdev", 00:07:35.269 "config": [ 00:07:35.269 { 00:07:35.269 "params": { 00:07:35.269 "trtype": "pcie", 00:07:35.269 "traddr": "0000:00:10.0", 00:07:35.269 "name": "Nvme0" 00:07:35.269 }, 00:07:35.269 "method": "bdev_nvme_attach_controller" 00:07:35.269 }, 00:07:35.269 { 00:07:35.269 "method": "bdev_wait_for_examine" 00:07:35.269 } 00:07:35.269 ] 00:07:35.269 } 00:07:35.269 ] 00:07:35.269 } 00:07:35.269 [2024-11-17 01:27:43.515468] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:35.269 [2024-11-17 01:27:43.515931] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61030 ] 00:07:35.269 [2024-11-17 01:27:43.686364] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.528 [2024-11-17 01:27:43.780468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.528 [2024-11-17 01:27:43.936157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:35.787  [2024-11-17T01:27:45.184Z] Copying: 60/60 [kB] (average 58 MBps) 00:07:36.725 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 61440 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=61440 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:36.725 01:27:44 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:36.725 { 00:07:36.725 "subsystems": [ 00:07:36.725 { 00:07:36.725 "subsystem": "bdev", 00:07:36.725 "config": [ 00:07:36.725 { 00:07:36.725 "params": { 00:07:36.725 "trtype": "pcie", 00:07:36.725 "traddr": "0000:00:10.0", 00:07:36.725 "name": "Nvme0" 00:07:36.725 }, 00:07:36.725 "method": "bdev_nvme_attach_controller" 00:07:36.725 }, 00:07:36.725 { 00:07:36.725 "method": "bdev_wait_for_examine" 00:07:36.725 } 00:07:36.725 ] 00:07:36.725 } 00:07:36.725 ] 00:07:36.725 } 00:07:36.725 [2024-11-17 01:27:45.067068] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:36.725 [2024-11-17 01:27:45.067785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61063 ] 00:07:36.984 [2024-11-17 01:27:45.246818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.984 [2024-11-17 01:27:45.337155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.243 [2024-11-17 01:27:45.493136] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:37.243  [2024-11-17T01:27:46.639Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:38.180 00:07:38.180 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:38.180 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:38.180 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:38.180 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:38.180 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:38.180 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:38.180 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:38.180 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.749 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=1 --json /dev/fd/62 00:07:38.749 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:38.749 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:38.749 01:27:46 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:38.749 { 00:07:38.749 "subsystems": [ 00:07:38.749 { 00:07:38.749 "subsystem": "bdev", 00:07:38.749 "config": [ 00:07:38.749 { 00:07:38.749 "params": { 00:07:38.749 "trtype": "pcie", 00:07:38.749 "traddr": "0000:00:10.0", 00:07:38.749 "name": "Nvme0" 00:07:38.749 }, 00:07:38.749 "method": "bdev_nvme_attach_controller" 00:07:38.749 }, 00:07:38.749 { 00:07:38.749 "method": "bdev_wait_for_examine" 00:07:38.749 } 00:07:38.749 ] 00:07:38.749 } 00:07:38.749 ] 00:07:38.749 } 00:07:38.749 [2024-11-17 01:27:47.028020] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:38.749 [2024-11-17 01:27:47.028204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61093 ] 00:07:38.749 [2024-11-17 01:27:47.206267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:39.008 [2024-11-17 01:27:47.290805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:39.008 [2024-11-17 01:27:47.447109] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:39.267  [2024-11-17T01:27:48.663Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:40.204 00:07:40.204 01:27:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:40.204 01:27:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=1 --count=7 --json /dev/fd/62 00:07:40.204 01:27:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:40.204 01:27:48 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:40.204 { 00:07:40.204 "subsystems": [ 00:07:40.204 { 00:07:40.204 "subsystem": "bdev", 00:07:40.204 "config": [ 00:07:40.204 { 00:07:40.204 "params": { 00:07:40.204 "trtype": "pcie", 00:07:40.204 "traddr": "0000:00:10.0", 00:07:40.204 "name": "Nvme0" 00:07:40.204 }, 00:07:40.204 "method": "bdev_nvme_attach_controller" 00:07:40.204 }, 00:07:40.204 { 00:07:40.204 "method": "bdev_wait_for_examine" 00:07:40.204 } 00:07:40.204 ] 00:07:40.204 } 00:07:40.204 ] 00:07:40.204 } 00:07:40.204 [2024-11-17 01:27:48.583746] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:40.204 [2024-11-17 01:27:48.583951] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61114 ] 00:07:40.464 [2024-11-17 01:27:48.764340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.464 [2024-11-17 01:27:48.846376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.723 [2024-11-17 01:27:49.005709] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:40.723  [2024-11-17T01:27:50.118Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:41.659 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:41.659 01:27:49 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:41.659 { 00:07:41.659 "subsystems": [ 00:07:41.659 { 00:07:41.659 "subsystem": "bdev", 00:07:41.659 "config": [ 00:07:41.659 { 00:07:41.659 "params": { 00:07:41.659 "trtype": "pcie", 00:07:41.659 "traddr": "0000:00:10.0", 00:07:41.659 "name": "Nvme0" 00:07:41.659 }, 00:07:41.659 "method": "bdev_nvme_attach_controller" 00:07:41.659 }, 00:07:41.659 { 00:07:41.659 "method": "bdev_wait_for_examine" 00:07:41.659 } 00:07:41.659 ] 00:07:41.659 } 00:07:41.659 ] 00:07:41.659 } 00:07:41.659 [2024-11-17 01:27:49.926390] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:41.659 [2024-11-17 01:27:49.926567] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61142 ] 00:07:41.659 [2024-11-17 01:27:50.105903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.918 [2024-11-17 01:27:50.192071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.918 [2024-11-17 01:27:50.339666] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:42.177  [2024-11-17T01:27:51.571Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:43.112 00:07:43.112 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:43.112 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=7 00:07:43.112 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=7 00:07:43.112 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=57344 00:07:43.112 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 57344 00:07:43.112 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:43.112 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.679 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=8192 --qd=64 --json /dev/fd/62 00:07:43.679 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:43.679 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:43.679 01:27:51 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:43.679 { 00:07:43.679 "subsystems": [ 00:07:43.679 { 00:07:43.679 "subsystem": "bdev", 00:07:43.679 "config": [ 00:07:43.679 { 00:07:43.679 "params": { 00:07:43.679 "trtype": "pcie", 00:07:43.679 "traddr": "0000:00:10.0", 00:07:43.679 "name": "Nvme0" 00:07:43.679 }, 00:07:43.679 "method": "bdev_nvme_attach_controller" 00:07:43.679 }, 00:07:43.679 { 00:07:43.679 "method": "bdev_wait_for_examine" 00:07:43.679 } 00:07:43.679 ] 00:07:43.679 } 00:07:43.679 ] 00:07:43.679 } 00:07:43.679 [2024-11-17 01:27:51.971973] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:43.679 [2024-11-17 01:27:51.972389] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61173 ] 00:07:43.938 [2024-11-17 01:27:52.154294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.938 [2024-11-17 01:27:52.246669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.197 [2024-11-17 01:27:52.398930] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:44.197  [2024-11-17T01:27:53.593Z] Copying: 56/56 [kB] (average 54 MBps) 00:07:45.134 00:07:45.134 01:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=8192 --qd=64 --count=7 --json /dev/fd/62 00:07:45.134 01:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:45.134 01:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:45.134 01:27:53 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:45.134 { 00:07:45.134 "subsystems": [ 00:07:45.134 { 00:07:45.134 "subsystem": "bdev", 00:07:45.134 "config": [ 00:07:45.134 { 00:07:45.134 "params": { 00:07:45.134 "trtype": "pcie", 00:07:45.134 "traddr": "0000:00:10.0", 00:07:45.134 "name": "Nvme0" 00:07:45.134 }, 00:07:45.134 "method": "bdev_nvme_attach_controller" 00:07:45.134 }, 00:07:45.134 { 00:07:45.134 "method": "bdev_wait_for_examine" 00:07:45.134 } 00:07:45.134 ] 00:07:45.134 } 00:07:45.134 ] 00:07:45.134 } 00:07:45.134 [2024-11-17 01:27:53.364751] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:45.134 [2024-11-17 01:27:53.364953] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61197 ] 00:07:45.134 [2024-11-17 01:27:53.542976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.394 [2024-11-17 01:27:53.644193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.394 [2024-11-17 01:27:53.808013] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:45.653  [2024-11-17T01:27:55.058Z] Copying: 56/56 [kB] (average 27 MBps) 00:07:46.599 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 57344 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=57344 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:46.599 01:27:54 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:46.599 { 00:07:46.599 "subsystems": [ 00:07:46.599 { 00:07:46.599 "subsystem": "bdev", 00:07:46.599 "config": [ 00:07:46.599 { 00:07:46.599 "params": { 00:07:46.599 "trtype": "pcie", 00:07:46.599 "traddr": "0000:00:10.0", 00:07:46.599 "name": "Nvme0" 00:07:46.599 }, 00:07:46.599 "method": "bdev_nvme_attach_controller" 00:07:46.599 }, 00:07:46.599 { 00:07:46.599 "method": "bdev_wait_for_examine" 00:07:46.599 } 00:07:46.599 ] 00:07:46.599 } 00:07:46.599 ] 00:07:46.599 } 00:07:46.599 [2024-11-17 01:27:54.948896] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:46.599 [2024-11-17 01:27:54.949321] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61220 ] 00:07:46.873 [2024-11-17 01:27:55.122031] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.873 [2024-11-17 01:27:55.203293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.131 [2024-11-17 01:27:55.363581] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:47.131  [2024-11-17T01:27:56.527Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:48.068 00:07:48.068 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@21 -- # for bs in "${bss[@]}" 00:07:48.068 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:48.068 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:48.068 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:48.068 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:48.068 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:48.068 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:48.068 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.328 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=1 --json /dev/fd/62 00:07:48.328 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:48.328 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:48.328 01:27:56 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:48.328 { 00:07:48.328 "subsystems": [ 00:07:48.328 { 00:07:48.328 "subsystem": "bdev", 00:07:48.328 "config": [ 00:07:48.328 { 00:07:48.328 "params": { 00:07:48.328 "trtype": "pcie", 00:07:48.328 "traddr": "0000:00:10.0", 00:07:48.328 "name": "Nvme0" 00:07:48.328 }, 00:07:48.328 "method": "bdev_nvme_attach_controller" 00:07:48.328 }, 00:07:48.328 { 00:07:48.328 "method": "bdev_wait_for_examine" 00:07:48.328 } 00:07:48.328 ] 00:07:48.328 } 00:07:48.328 ] 00:07:48.328 } 00:07:48.328 [2024-11-17 01:27:56.734131] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:48.328 [2024-11-17 01:27:56.734291] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61251 ] 00:07:48.587 [2024-11-17 01:27:56.887878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.587 [2024-11-17 01:27:56.978031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.846 [2024-11-17 01:27:57.120899] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:48.846  [2024-11-17T01:27:58.242Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:49.784 00:07:49.784 01:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=1 --count=3 --json /dev/fd/62 00:07:49.784 01:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:49.784 01:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:49.784 01:27:58 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:49.784 { 00:07:49.784 "subsystems": [ 00:07:49.784 { 00:07:49.784 "subsystem": "bdev", 00:07:49.784 "config": [ 00:07:49.784 { 00:07:49.784 "params": { 00:07:49.784 "trtype": "pcie", 00:07:49.784 "traddr": "0000:00:10.0", 00:07:49.784 "name": "Nvme0" 00:07:49.784 }, 00:07:49.784 "method": "bdev_nvme_attach_controller" 00:07:49.784 }, 00:07:49.784 { 00:07:49.784 "method": "bdev_wait_for_examine" 00:07:49.784 } 00:07:49.784 ] 00:07:49.784 } 00:07:49.784 ] 00:07:49.784 } 00:07:49.784 [2024-11-17 01:27:58.210242] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:49.784 [2024-11-17 01:27:58.210375] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61277 ] 00:07:50.043 [2024-11-17 01:27:58.374020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.043 [2024-11-17 01:27:58.456447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.302 [2024-11-17 01:27:58.624816] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:50.562  [2024-11-17T01:27:59.589Z] Copying: 48/48 [kB] (average 23 MBps) 00:07:51.130 00:07:51.130 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:51.130 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:51.130 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:51.130 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:51.131 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:51.131 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:51.131 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:51.131 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:51.131 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:51.131 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:51.131 01:27:59 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:51.131 { 00:07:51.131 "subsystems": [ 00:07:51.131 { 00:07:51.131 "subsystem": "bdev", 00:07:51.131 "config": [ 00:07:51.131 { 00:07:51.131 "params": { 00:07:51.131 "trtype": "pcie", 00:07:51.131 "traddr": "0000:00:10.0", 00:07:51.131 "name": "Nvme0" 00:07:51.131 }, 00:07:51.131 "method": "bdev_nvme_attach_controller" 00:07:51.131 }, 00:07:51.131 { 00:07:51.131 "method": "bdev_wait_for_examine" 00:07:51.131 } 00:07:51.131 ] 00:07:51.131 } 00:07:51.131 ] 00:07:51.131 } 00:07:51.131 [2024-11-17 01:27:59.544820] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:51.131 [2024-11-17 01:27:59.545204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61299 ] 00:07:51.390 [2024-11-17 01:27:59.719235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.390 [2024-11-17 01:27:59.800247] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.648 [2024-11-17 01:27:59.947191] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:51.906  [2024-11-17T01:28:01.301Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:52.842 00:07:52.842 01:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@22 -- # for qd in "${qds[@]}" 00:07:52.842 01:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@23 -- # count=3 00:07:52.842 01:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@24 -- # count=3 00:07:52.842 01:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@25 -- # size=49152 00:07:52.842 01:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@27 -- # gen_bytes 49152 00:07:52.842 01:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@98 -- # xtrace_disable 00:07:52.842 01:28:00 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:53.101 01:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --bs=16384 --qd=64 --json /dev/fd/62 00:07:53.101 01:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@30 -- # gen_conf 00:07:53.101 01:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:53.101 01:28:01 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:53.101 { 00:07:53.101 "subsystems": [ 00:07:53.101 { 00:07:53.101 "subsystem": "bdev", 00:07:53.101 "config": [ 00:07:53.101 { 00:07:53.101 "params": { 00:07:53.101 "trtype": "pcie", 00:07:53.101 "traddr": "0000:00:10.0", 00:07:53.101 "name": "Nvme0" 00:07:53.101 }, 00:07:53.101 "method": "bdev_nvme_attach_controller" 00:07:53.101 }, 00:07:53.101 { 00:07:53.101 "method": "bdev_wait_for_examine" 00:07:53.101 } 00:07:53.101 ] 00:07:53.101 } 00:07:53.101 ] 00:07:53.101 } 00:07:53.101 [2024-11-17 01:28:01.516455] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:53.101 [2024-11-17 01:28:01.516649] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61330 ] 00:07:53.360 [2024-11-17 01:28:01.696571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.360 [2024-11-17 01:28:01.791041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.618 [2024-11-17 01:28:01.947937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:53.878  [2024-11-17T01:28:02.905Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:54.446 00:07:54.446 01:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=16384 --qd=64 --count=3 --json /dev/fd/62 00:07:54.446 01:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@37 -- # gen_conf 00:07:54.446 01:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:54.446 01:28:02 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:54.446 { 00:07:54.446 "subsystems": [ 00:07:54.446 { 00:07:54.446 "subsystem": "bdev", 00:07:54.446 "config": [ 00:07:54.446 { 00:07:54.446 "params": { 00:07:54.446 "trtype": "pcie", 00:07:54.446 "traddr": "0000:00:10.0", 00:07:54.446 "name": "Nvme0" 00:07:54.446 }, 00:07:54.446 "method": "bdev_nvme_attach_controller" 00:07:54.446 }, 00:07:54.446 { 00:07:54.446 "method": "bdev_wait_for_examine" 00:07:54.446 } 00:07:54.446 ] 00:07:54.446 } 00:07:54.446 ] 00:07:54.446 } 00:07:54.705 [2024-11-17 01:28:02.931641] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:54.705 [2024-11-17 01:28:02.931862] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61355 ] 00:07:54.705 [2024-11-17 01:28:03.108535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.964 [2024-11-17 01:28:03.193632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.964 [2024-11-17 01:28:03.345741] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:55.222  [2024-11-17T01:28:04.618Z] Copying: 48/48 [kB] (average 46 MBps) 00:07:56.159 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@44 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/basic_rw.sh@45 -- # clear_nvme Nvme0n1 '' 49152 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@11 -- # local nvme_ref= 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@12 -- # local size=49152 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@14 -- # local bs=1048576 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@15 -- # local count=1 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@18 -- # gen_conf 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- dd/common.sh@31 -- # xtrace_disable 00:07:56.159 01:28:04 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:56.159 { 00:07:56.159 "subsystems": [ 00:07:56.159 { 00:07:56.159 "subsystem": "bdev", 00:07:56.159 "config": [ 00:07:56.159 { 00:07:56.159 "params": { 00:07:56.159 "trtype": "pcie", 00:07:56.159 "traddr": "0000:00:10.0", 00:07:56.159 "name": "Nvme0" 00:07:56.159 }, 00:07:56.159 "method": "bdev_nvme_attach_controller" 00:07:56.159 }, 00:07:56.159 { 00:07:56.159 "method": "bdev_wait_for_examine" 00:07:56.159 } 00:07:56.159 ] 00:07:56.159 } 00:07:56.159 ] 00:07:56.159 } 00:07:56.159 [2024-11-17 01:28:04.478512] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:56.159 [2024-11-17 01:28:04.478683] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61383 ] 00:07:56.418 [2024-11-17 01:28:04.655922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.418 [2024-11-17 01:28:04.738706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:56.678 [2024-11-17 01:28:04.899773] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:56.678  [2024-11-17T01:28:06.073Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:07:57.614 00:07:57.614 ************************************ 00:07:57.614 END TEST dd_rw 00:07:57.614 ************************************ 00:07:57.614 00:07:57.614 real 0m29.328s 00:07:57.614 user 0m24.490s 00:07:57.614 sys 0m13.583s 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@104 -- # run_test dd_rw_offset basic_offset 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 ************************************ 00:07:57.614 START TEST dd_rw_offset 00:07:57.614 ************************************ 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1129 -- # basic_offset 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@52 -- # local count seek skip data data_check 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@54 -- # gen_bytes 4096 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@98 -- # xtrace_disable 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:57.614 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@55 -- # (( count = seek = skip = 1 )) 00:07:57.615 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@56 -- # 
data=o7n10y2fhrfucsi9w15yd3t1yunr0jdmy3ab3xcqju5e625swtxbflgbbsdl1phy8nu67dwk4ok7l0vw68qt9h4qmlgmbkgbq22pqe5gmeebjccpf04denn9nm2ybgefn1swkw85mbksopf2okw20bqjki5pi3u5gqkjlxe053iu9dbju175yyzgm1o3jrae0gk7dy28xd6whwiuvw79pzv4xef9r717mxma56dho8mcoac1sil3cle6nw6j0hsebbhugtig66e3ukuw4vf86vnc36ecxr2tj8yfpoz9jaszkmjvkupkc5t4w4iwqh75b12l9rwxjuqvocxokct8umsz7udfud4oaablqby5igecad0ynh4fyyerv10i54c17y02l2ic1cvafn4xj0iu892uwtinq79yz9py2ayidbewi78seq7eqii0vkfhxfqktlcqvp06u8jja38kedm3ysm9z2bdpvvm7v724quw76bubgopmpx51wffkgvf3ox7pvfvw9vagh8otfihubsqxdlndil4ud6p9kih1qb8w77y909lumxr69dgu357pjnokfw8f8m5y5xk35dfpzrj1xfo5xw4681lldls94bnm2mkgy199d4csnmdea9f6mtrx87buvljiwbwq3nnzwvldtk8hwfyavga0lgtykpqybnwyify8mhs37134h1reiduozhkuyxj8xqxqtq9amjrhnjr6cdm2k24gjuy1ug86utgs7tknqmnzw10612ib8ntnpyz9yxft2qtx2ozk7u3tlk5antd6ggc9zwnwkivjorxoenh0scwvenmivrql0x56yje8p2ev4gpl4m95mh4gcyk6lev19q3zsetrm1d1hvm37ydl5ac7m5vpnj4yjbufchvqdu08bsru5r6gl1trf1rmyfxf4q042e7wumnav4vb6zkvgq72rc94i382p78wkbc1k4pxvervrsomc8imi8len2eahmjc254zj7e0vaih0suigpu10horjwj4tfnzn8n6r2scmp2ziz9fe87w0wljb11frtto2hnzm46acoqf9silwkrzyrdxjo9bdtg73bh9edoll69rbv2jhrzu6k6u3lo2urcdu3uvdm6ujtox4qemkd2m9j41ugk5gk2beqzoeyw0jz3i0lz052nj71q6yj0ttudszoln0jr1d91manxxeebo0x3ii36zf2i4nowydkyabzgpoxc7o0xuvtjmmqpgx9yd0pm6b6tdb33s2s8do216nw4gf4m6mtmiixoneoykmu8aszg1uart4amuu9iszl2fl675fln8sduyi8q6mkfcgowybn6pytxlpkhtxaypbij7to0y9dz41qsqxhg66usgf73hnbxovs5rl51uyq6vbbwtbxtd3s4cwbo9e5ynl6twqtzj0319ei5xb9kqzd61aybd6s2asu4n4yoj0zxs3nt2rd1a7uank3ypsftc201yot7rek9yppv5ilyonskyi0ycty63e88siizonvmxvrhvtc7mbwxmjmms319td1rt07steqm75aukij268gg8ehoflpqofoubpojsfj4oyhibf6p7q79tphcqu2rchskx06ae0aoojwzd99bhvbm11cqmmve1ujld28tmcuc1c7xdzqf3lwkb3p8x3tn3tu65qbhhge0zjkx2ega20vppp54qa90gpb94mq2lkjy32cw57xul4qgf1zykhsz4pxbub9u6po575olq1sde4qh2xp1dus4izdfaawnl5hzl7rwjxpstyvcfr3fw9c9kl2awpccyi10pdpr64tpeagmc6ngqsg2p0yf2c7tcu8c7vetfezqe19l6eyiyzvm9supbq4ss7w437p3f5dnbelwi9ncvpoizifn8q5obmcfh8rccybcxrl8542i1tkm2din9tsbf7tbtgnars4cxfttatamfxbe4eemwiczkxv9qlwilyj97juti4bzvgbyc5tb8qmbeur3bpafmtvuw6dcrmgvcynjw4hocs3ddytpzunqzpqyd1he9qshm6wg1uh77b723ue2lpz6h5uv2ok78l8hcua38mwul3cyob0u3sbka523nzeepiddt1pu9vjiu9dktb312cvcxg7m99a60r97tj6b0age7bzj78lu2wy52wj3d8cf1bx4t5ssvzjfs15zs8vfjml7t1ypfkh76kh8lu1gjq5kmazqbkkuk581iui4exbiixkkxughaft67jcd8efxa3guvq7kwbnenl6muamc1fpnpu96hx2hjrt0eidgqsi4zuscaz81m8qojwghbo6zdc5lwmv2he3bqjvth4aji18tl2v2zowr4vf3kfd2xr4dly7o2rq7wvlns25b7kvm3ieitgrf183mkys3hn3cckfqg8pieoxxdj56ess3nz1zqf2qfyu6wzpq65ky3dqnyjetyoy5y6ymtp630axcmrnfy8k0zhkkspp9jjfkwl83nw26rj5xbsubppqhqz1ysp9hxnhtrf27sscw79qyjjy2f4tae5pnf5fy1zfw2xorxb38tbssdk0khinqdio791cdst8347r6lowhl2izprzdlvl8e22deprflioaujcmzusnjxicr8faj1sgqxtpsz1nr8o9gms5g7vgpmsylx1sm3je4xoz58f28le74xc28e4ditwwcgjjw8z9lgxc9d5m51tlem419y7vwlbn19ulv0ast9d5fj6n81nl8m4fa9o7b1oiw5jde39kckwv6anyk0lfbt32oq3q8ikcfoykd4dwwo6w8e17s4hcuuk4x7w5frzlx50v6wtxw3jlakwopta54rnux7j0tudwpbxnjm730u3qad5ot0td9xv6mbvhnkue0gu8xd139rdpb37wbcd8dp9tkybio9b7agzhhlx9aesmi0xj31bxi9ss8trbrhieil1psrr4kyk8vzb33gfyen8npwsnog1o2kkkl16ivndvqw3tq6jy3ea48zj01nctnz2xf9gf20mje90ww9qwq7f5xa0pn8zcowvwysoyzwjua8iejd4fk7ld8ry4pxw3spj6382k3h8hrjc8gnhbkrcq59w7dgfq42xu9p5jxnmsvuxxqsydeqgbe1fpdak8xa49o4zysdmw74dfns8nae9diezzc8pb2jlkpv87qyzvcb1679ha2pylaitjknkbmiasrq1j80e93ohwx2qab82eu250oqsezlt5xi8lvro8chjuff56h3lbx6q9vkss1exqtszvcu44tmujcvam3bkyrol3okgv2vcpiqa5wd73d2e60tcyiombdj5slu74ddsixihvgptl5vxn5tvyp5kjlvsglyrnuq0kl40kfp4ahlvvnrty1iq10zgsf4lhj3pqwhlbqvttdq83xwn8r2fdsinxtlaafnz5lrf1wyky4c6snqx3prqk9vtl0dexf0lhvi0e6zypv8jpm8l9c16pop44arnlj0sgs1c5bvlbkewgi5h6tq2ggkx9pnv30fl4wpmbbtrywy8tuaj9vt
nfg4glzv2znbls6cj9ndn7odkkx4b4o2xpi0c7da8o2fd320krudkva76e0823hdx6juo8fm9sxow81m2fjzbe26yu95vkpw9ifdvbpti0b3k6x4iajj1itpanehzdgcle4xk8p661ih012b8ur6oq6sp25fxe44txlzb1sd7gjgk56x45zxqg435vsmgqfmv3rp4yt0wxmv3ln6v0nqx9peej9qbu8vnb99kbr4ziflk5ahc1ov2uq4r153stn5695hxjcgo2xjtv4iwowekmtdovowfp0je0rtpu6vvj2nfk0nwkhajyqsuv1mnmiv0zbph610shwsjndrsqkcwacgy6h3o198bncwak2dvz63qd3t59u7ohdk1z383tn2a2fx4peedpzh7wqwiopy5dpkbw44zvha5nqpfee72hvk1d9a2zsepu3vvg9sceb54foxh4jjzjv2mqvwqy9aa4ch1ud6x7e2a1dlbbcvd13pbv1dlo2645s5lky20ulvdy4t03w246iocfsxxty1zwitm8uhs4hdte 00:07:57.615 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --seek=1 --json /dev/fd/62 00:07:57.615 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@59 -- # gen_conf 00:07:57.615 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:57.615 01:28:05 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:57.615 { 00:07:57.615 "subsystems": [ 00:07:57.615 { 00:07:57.615 "subsystem": "bdev", 00:07:57.615 "config": [ 00:07:57.615 { 00:07:57.615 "params": { 00:07:57.615 "trtype": "pcie", 00:07:57.615 "traddr": "0000:00:10.0", 00:07:57.615 "name": "Nvme0" 00:07:57.615 }, 00:07:57.615 "method": "bdev_nvme_attach_controller" 00:07:57.615 }, 00:07:57.615 { 00:07:57.615 "method": "bdev_wait_for_examine" 00:07:57.615 } 00:07:57.615 ] 00:07:57.615 } 00:07:57.615 ] 00:07:57.615 } 00:07:57.615 [2024-11-17 01:28:05.946473] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:07:57.615 [2024-11-17 01:28:05.946625] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61420 ] 00:07:57.874 [2024-11-17 01:28:06.109319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.874 [2024-11-17 01:28:06.197747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.133 [2024-11-17 01:28:06.340937] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:58.133  [2024-11-17T01:28:07.530Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:07:59.071 00:07:59.071 01:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --skip=1 --count=1 --json /dev/fd/62 00:07:59.071 01:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@65 -- # gen_conf 00:07:59.071 01:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/common.sh@31 -- # xtrace_disable 00:07:59.071 01:28:07 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:07:59.071 { 00:07:59.071 "subsystems": [ 00:07:59.071 { 00:07:59.071 "subsystem": "bdev", 00:07:59.071 "config": [ 00:07:59.071 { 00:07:59.071 "params": { 00:07:59.071 "trtype": "pcie", 00:07:59.071 "traddr": "0000:00:10.0", 00:07:59.071 "name": "Nvme0" 00:07:59.071 }, 00:07:59.071 "method": "bdev_nvme_attach_controller" 00:07:59.071 }, 00:07:59.071 { 00:07:59.071 "method": "bdev_wait_for_examine" 00:07:59.071 } 00:07:59.071 ] 00:07:59.071 } 00:07:59.071 ] 00:07:59.071 } 00:07:59.071 [2024-11-17 01:28:07.451811] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:07:59.071 [2024-11-17 01:28:07.451992] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61451 ] 00:07:59.330 [2024-11-17 01:28:07.633337] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.330 [2024-11-17 01:28:07.725551] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.590 [2024-11-17 01:28:07.884252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:07:59.590  [2024-11-17T01:28:09.008Z] Copying: 4096/4096 [B] (average 4000 kBps) 00:08:00.549 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@71 -- # read -rn4096 data_check 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- dd/basic_rw.sh@72 -- # [[ o7n10y2fhrfucsi9w15yd3t1yunr0jdmy3ab3xcqju5e625swtxbflgbbsdl1phy8nu67dwk4ok7l0vw68qt9h4qmlgmbkgbq22pqe5gmeebjccpf04denn9nm2ybgefn1swkw85mbksopf2okw20bqjki5pi3u5gqkjlxe053iu9dbju175yyzgm1o3jrae0gk7dy28xd6whwiuvw79pzv4xef9r717mxma56dho8mcoac1sil3cle6nw6j0hsebbhugtig66e3ukuw4vf86vnc36ecxr2tj8yfpoz9jaszkmjvkupkc5t4w4iwqh75b12l9rwxjuqvocxokct8umsz7udfud4oaablqby5igecad0ynh4fyyerv10i54c17y02l2ic1cvafn4xj0iu892uwtinq79yz9py2ayidbewi78seq7eqii0vkfhxfqktlcqvp06u8jja38kedm3ysm9z2bdpvvm7v724quw76bubgopmpx51wffkgvf3ox7pvfvw9vagh8otfihubsqxdlndil4ud6p9kih1qb8w77y909lumxr69dgu357pjnokfw8f8m5y5xk35dfpzrj1xfo5xw4681lldls94bnm2mkgy199d4csnmdea9f6mtrx87buvljiwbwq3nnzwvldtk8hwfyavga0lgtykpqybnwyify8mhs37134h1reiduozhkuyxj8xqxqtq9amjrhnjr6cdm2k24gjuy1ug86utgs7tknqmnzw10612ib8ntnpyz9yxft2qtx2ozk7u3tlk5antd6ggc9zwnwkivjorxoenh0scwvenmivrql0x56yje8p2ev4gpl4m95mh4gcyk6lev19q3zsetrm1d1hvm37ydl5ac7m5vpnj4yjbufchvqdu08bsru5r6gl1trf1rmyfxf4q042e7wumnav4vb6zkvgq72rc94i382p78wkbc1k4pxvervrsomc8imi8len2eahmjc254zj7e0vaih0suigpu10horjwj4tfnzn8n6r2scmp2ziz9fe87w0wljb11frtto2hnzm46acoqf9silwkrzyrdxjo9bdtg73bh9edoll69rbv2jhrzu6k6u3lo2urcdu3uvdm6ujtox4qemkd2m9j41ugk5gk2beqzoeyw0jz3i0lz052nj71q6yj0ttudszoln0jr1d91manxxeebo0x3ii36zf2i4nowydkyabzgpoxc7o0xuvtjmmqpgx9yd0pm6b6tdb33s2s8do216nw4gf4m6mtmiixoneoykmu8aszg1uart4amuu9iszl2fl675fln8sduyi8q6mkfcgowybn6pytxlpkhtxaypbij7to0y9dz41qsqxhg66usgf73hnbxovs5rl51uyq6vbbwtbxtd3s4cwbo9e5ynl6twqtzj0319ei5xb9kqzd61aybd6s2asu4n4yoj0zxs3nt2rd1a7uank3ypsftc201yot7rek9yppv5ilyonskyi0ycty63e88siizonvmxvrhvtc7mbwxmjmms319td1rt07steqm75aukij268gg8ehoflpqofoubpojsfj4oyhibf6p7q79tphcqu2rchskx06ae0aoojwzd99bhvbm11cqmmve1ujld28tmcuc1c7xdzqf3lwkb3p8x3tn3tu65qbhhge0zjkx2ega20vppp54qa90gpb94mq2lkjy32cw57xul4qgf1zykhsz4pxbub9u6po575olq1sde4qh2xp1dus4izdfaawnl5hzl7rwjxpstyvcfr3fw9c9kl2awpccyi10pdpr64tpeagmc6ngqsg2p0yf2c7tcu8c7vetfezqe19l6eyiyzvm9supbq4ss7w437p3f5dnbelwi9ncvpoizifn8q5obmcfh8rccybcxrl8542i1tkm2din9tsbf7tbtgnars4cxfttatamfxbe4eemwiczkxv9qlwilyj97juti4bzvgbyc5tb8qmbeur3bpafmtvuw6dcrmgvcynjw4hocs3ddytpzunqzpqyd1he9qshm6wg1uh77b723ue2lpz6h5uv2ok78l8hcua38mwul3cyob0u3sbka523nzeepiddt1pu9vjiu9dktb312cvcxg7m99a60r97tj6b0age7bzj78lu2wy52wj3d8cf1bx4t5ssvzjfs15zs8vfjml7t1ypfkh76kh8lu1gjq5kmazqbkkuk581iui4exbiixkkxughaft67jcd8efxa3guvq7kwbnenl6muamc1fpnpu96hx2hjrt0eidgqsi4zuscaz81m8qojwghbo6zdc5lwmv2he3bqjvth4aji18tl2v2zowr4vf3kfd2xr4dly7o2rq7wvlns25b7kvm3ieitgrf183mkys3hn3cckfqg8pieoxxdj56ess3nz1zqf2qfyu6wzpq65ky3dqnyjetyoy5y6ymtp630axcmrnfy8k0zhkkspp9jjfkwl83nw26rj5xbsubppqhqz1ysp9hxnhtrf27sscw79qyjjy2f4tae5pnf5fy1zfw2xorxb38tbssdk0khinqdio791cdst8347r6lowhl2izprzdlvl8
e22deprflioaujcmzusnjxicr8faj1sgqxtpsz1nr8o9gms5g7vgpmsylx1sm3je4xoz58f28le74xc28e4ditwwcgjjw8z9lgxc9d5m51tlem419y7vwlbn19ulv0ast9d5fj6n81nl8m4fa9o7b1oiw5jde39kckwv6anyk0lfbt32oq3q8ikcfoykd4dwwo6w8e17s4hcuuk4x7w5frzlx50v6wtxw3jlakwopta54rnux7j0tudwpbxnjm730u3qad5ot0td9xv6mbvhnkue0gu8xd139rdpb37wbcd8dp9tkybio9b7agzhhlx9aesmi0xj31bxi9ss8trbrhieil1psrr4kyk8vzb33gfyen8npwsnog1o2kkkl16ivndvqw3tq6jy3ea48zj01nctnz2xf9gf20mje90ww9qwq7f5xa0pn8zcowvwysoyzwjua8iejd4fk7ld8ry4pxw3spj6382k3h8hrjc8gnhbkrcq59w7dgfq42xu9p5jxnmsvuxxqsydeqgbe1fpdak8xa49o4zysdmw74dfns8nae9diezzc8pb2jlkpv87qyzvcb1679ha2pylaitjknkbmiasrq1j80e93ohwx2qab82eu250oqsezlt5xi8lvro8chjuff56h3lbx6q9vkss1exqtszvcu44tmujcvam3bkyrol3okgv2vcpiqa5wd73d2e60tcyiombdj5slu74ddsixihvgptl5vxn5tvyp5kjlvsglyrnuq0kl40kfp4ahlvvnrty1iq10zgsf4lhj3pqwhlbqvttdq83xwn8r2fdsinxtlaafnz5lrf1wyky4c6snqx3prqk9vtl0dexf0lhvi0e6zypv8jpm8l9c16pop44arnlj0sgs1c5bvlbkewgi5h6tq2ggkx9pnv30fl4wpmbbtrywy8tuaj9vtnfg4glzv2znbls6cj9ndn7odkkx4b4o2xpi0c7da8o2fd320krudkva76e0823hdx6juo8fm9sxow81m2fjzbe26yu95vkpw9ifdvbpti0b3k6x4iajj1itpanehzdgcle4xk8p661ih012b8ur6oq6sp25fxe44txlzb1sd7gjgk56x45zxqg435vsmgqfmv3rp4yt0wxmv3ln6v0nqx9peej9qbu8vnb99kbr4ziflk5ahc1ov2uq4r153stn5695hxjcgo2xjtv4iwowekmtdovowfp0je0rtpu6vvj2nfk0nwkhajyqsuv1mnmiv0zbph610shwsjndrsqkcwacgy6h3o198bncwak2dvz63qd3t59u7ohdk1z383tn2a2fx4peedpzh7wqwiopy5dpkbw44zvha5nqpfee72hvk1d9a2zsepu3vvg9sceb54foxh4jjzjv2mqvwqy9aa4ch1ud6x7e2a1dlbbcvd13pbv1dlo2645s5lky20ulvdy4t03w246iocfsxxty1zwitm8uhs4hdte == \o\7\n\1\0\y\2\f\h\r\f\u\c\s\i\9\w\1\5\y\d\3\t\1\y\u\n\r\0\j\d\m\y\3\a\b\3\x\c\q\j\u\5\e\6\2\5\s\w\t\x\b\f\l\g\b\b\s\d\l\1\p\h\y\8\n\u\6\7\d\w\k\4\o\k\7\l\0\v\w\6\8\q\t\9\h\4\q\m\l\g\m\b\k\g\b\q\2\2\p\q\e\5\g\m\e\e\b\j\c\c\p\f\0\4\d\e\n\n\9\n\m\2\y\b\g\e\f\n\1\s\w\k\w\8\5\m\b\k\s\o\p\f\2\o\k\w\2\0\b\q\j\k\i\5\p\i\3\u\5\g\q\k\j\l\x\e\0\5\3\i\u\9\d\b\j\u\1\7\5\y\y\z\g\m\1\o\3\j\r\a\e\0\g\k\7\d\y\2\8\x\d\6\w\h\w\i\u\v\w\7\9\p\z\v\4\x\e\f\9\r\7\1\7\m\x\m\a\5\6\d\h\o\8\m\c\o\a\c\1\s\i\l\3\c\l\e\6\n\w\6\j\0\h\s\e\b\b\h\u\g\t\i\g\6\6\e\3\u\k\u\w\4\v\f\8\6\v\n\c\3\6\e\c\x\r\2\t\j\8\y\f\p\o\z\9\j\a\s\z\k\m\j\v\k\u\p\k\c\5\t\4\w\4\i\w\q\h\7\5\b\1\2\l\9\r\w\x\j\u\q\v\o\c\x\o\k\c\t\8\u\m\s\z\7\u\d\f\u\d\4\o\a\a\b\l\q\b\y\5\i\g\e\c\a\d\0\y\n\h\4\f\y\y\e\r\v\1\0\i\5\4\c\1\7\y\0\2\l\2\i\c\1\c\v\a\f\n\4\x\j\0\i\u\8\9\2\u\w\t\i\n\q\7\9\y\z\9\p\y\2\a\y\i\d\b\e\w\i\7\8\s\e\q\7\e\q\i\i\0\v\k\f\h\x\f\q\k\t\l\c\q\v\p\0\6\u\8\j\j\a\3\8\k\e\d\m\3\y\s\m\9\z\2\b\d\p\v\v\m\7\v\7\2\4\q\u\w\7\6\b\u\b\g\o\p\m\p\x\5\1\w\f\f\k\g\v\f\3\o\x\7\p\v\f\v\w\9\v\a\g\h\8\o\t\f\i\h\u\b\s\q\x\d\l\n\d\i\l\4\u\d\6\p\9\k\i\h\1\q\b\8\w\7\7\y\9\0\9\l\u\m\x\r\6\9\d\g\u\3\5\7\p\j\n\o\k\f\w\8\f\8\m\5\y\5\x\k\3\5\d\f\p\z\r\j\1\x\f\o\5\x\w\4\6\8\1\l\l\d\l\s\9\4\b\n\m\2\m\k\g\y\1\9\9\d\4\c\s\n\m\d\e\a\9\f\6\m\t\r\x\8\7\b\u\v\l\j\i\w\b\w\q\3\n\n\z\w\v\l\d\t\k\8\h\w\f\y\a\v\g\a\0\l\g\t\y\k\p\q\y\b\n\w\y\i\f\y\8\m\h\s\3\7\1\3\4\h\1\r\e\i\d\u\o\z\h\k\u\y\x\j\8\x\q\x\q\t\q\9\a\m\j\r\h\n\j\r\6\c\d\m\2\k\2\4\g\j\u\y\1\u\g\8\6\u\t\g\s\7\t\k\n\q\m\n\z\w\1\0\6\1\2\i\b\8\n\t\n\p\y\z\9\y\x\f\t\2\q\t\x\2\o\z\k\7\u\3\t\l\k\5\a\n\t\d\6\g\g\c\9\z\w\n\w\k\i\v\j\o\r\x\o\e\n\h\0\s\c\w\v\e\n\m\i\v\r\q\l\0\x\5\6\y\j\e\8\p\2\e\v\4\g\p\l\4\m\9\5\m\h\4\g\c\y\k\6\l\e\v\1\9\q\3\z\s\e\t\r\m\1\d\1\h\v\m\3\7\y\d\l\5\a\c\7\m\5\v\p\n\j\4\y\j\b\u\f\c\h\v\q\d\u\0\8\b\s\r\u\5\r\6\g\l\1\t\r\f\1\r\m\y\f\x\f\4\q\0\4\2\e\7\w\u\m\n\a\v\4\v\b\6\z\k\v\g\q\7\2\r\c\9\4\i\3\8\2\p\7\8\w\k\b\c\1\k\4\p\x\v\e\r\v\r\s\o\m\c\8\i\m\i\8\l\e\n\2\e\a\h\m\j\c\2\5\4\z\j\7\e\0\v\a\i\h\0\s\u\i\g\p\u\1\0\h\o\r\j\w\j\4\t\f\n\z\n\8\n\6\r\2\
s\c\m\p\2\z\i\z\9\f\e\8\7\w\0\w\l\j\b\1\1\f\r\t\t\o\2\h\n\z\m\4\6\a\c\o\q\f\9\s\i\l\w\k\r\z\y\r\d\x\j\o\9\b\d\t\g\7\3\b\h\9\e\d\o\l\l\6\9\r\b\v\2\j\h\r\z\u\6\k\6\u\3\l\o\2\u\r\c\d\u\3\u\v\d\m\6\u\j\t\o\x\4\q\e\m\k\d\2\m\9\j\4\1\u\g\k\5\g\k\2\b\e\q\z\o\e\y\w\0\j\z\3\i\0\l\z\0\5\2\n\j\7\1\q\6\y\j\0\t\t\u\d\s\z\o\l\n\0\j\r\1\d\9\1\m\a\n\x\x\e\e\b\o\0\x\3\i\i\3\6\z\f\2\i\4\n\o\w\y\d\k\y\a\b\z\g\p\o\x\c\7\o\0\x\u\v\t\j\m\m\q\p\g\x\9\y\d\0\p\m\6\b\6\t\d\b\3\3\s\2\s\8\d\o\2\1\6\n\w\4\g\f\4\m\6\m\t\m\i\i\x\o\n\e\o\y\k\m\u\8\a\s\z\g\1\u\a\r\t\4\a\m\u\u\9\i\s\z\l\2\f\l\6\7\5\f\l\n\8\s\d\u\y\i\8\q\6\m\k\f\c\g\o\w\y\b\n\6\p\y\t\x\l\p\k\h\t\x\a\y\p\b\i\j\7\t\o\0\y\9\d\z\4\1\q\s\q\x\h\g\6\6\u\s\g\f\7\3\h\n\b\x\o\v\s\5\r\l\5\1\u\y\q\6\v\b\b\w\t\b\x\t\d\3\s\4\c\w\b\o\9\e\5\y\n\l\6\t\w\q\t\z\j\0\3\1\9\e\i\5\x\b\9\k\q\z\d\6\1\a\y\b\d\6\s\2\a\s\u\4\n\4\y\o\j\0\z\x\s\3\n\t\2\r\d\1\a\7\u\a\n\k\3\y\p\s\f\t\c\2\0\1\y\o\t\7\r\e\k\9\y\p\p\v\5\i\l\y\o\n\s\k\y\i\0\y\c\t\y\6\3\e\8\8\s\i\i\z\o\n\v\m\x\v\r\h\v\t\c\7\m\b\w\x\m\j\m\m\s\3\1\9\t\d\1\r\t\0\7\s\t\e\q\m\7\5\a\u\k\i\j\2\6\8\g\g\8\e\h\o\f\l\p\q\o\f\o\u\b\p\o\j\s\f\j\4\o\y\h\i\b\f\6\p\7\q\7\9\t\p\h\c\q\u\2\r\c\h\s\k\x\0\6\a\e\0\a\o\o\j\w\z\d\9\9\b\h\v\b\m\1\1\c\q\m\m\v\e\1\u\j\l\d\2\8\t\m\c\u\c\1\c\7\x\d\z\q\f\3\l\w\k\b\3\p\8\x\3\t\n\3\t\u\6\5\q\b\h\h\g\e\0\z\j\k\x\2\e\g\a\2\0\v\p\p\p\5\4\q\a\9\0\g\p\b\9\4\m\q\2\l\k\j\y\3\2\c\w\5\7\x\u\l\4\q\g\f\1\z\y\k\h\s\z\4\p\x\b\u\b\9\u\6\p\o\5\7\5\o\l\q\1\s\d\e\4\q\h\2\x\p\1\d\u\s\4\i\z\d\f\a\a\w\n\l\5\h\z\l\7\r\w\j\x\p\s\t\y\v\c\f\r\3\f\w\9\c\9\k\l\2\a\w\p\c\c\y\i\1\0\p\d\p\r\6\4\t\p\e\a\g\m\c\6\n\g\q\s\g\2\p\0\y\f\2\c\7\t\c\u\8\c\7\v\e\t\f\e\z\q\e\1\9\l\6\e\y\i\y\z\v\m\9\s\u\p\b\q\4\s\s\7\w\4\3\7\p\3\f\5\d\n\b\e\l\w\i\9\n\c\v\p\o\i\z\i\f\n\8\q\5\o\b\m\c\f\h\8\r\c\c\y\b\c\x\r\l\8\5\4\2\i\1\t\k\m\2\d\i\n\9\t\s\b\f\7\t\b\t\g\n\a\r\s\4\c\x\f\t\t\a\t\a\m\f\x\b\e\4\e\e\m\w\i\c\z\k\x\v\9\q\l\w\i\l\y\j\9\7\j\u\t\i\4\b\z\v\g\b\y\c\5\t\b\8\q\m\b\e\u\r\3\b\p\a\f\m\t\v\u\w\6\d\c\r\m\g\v\c\y\n\j\w\4\h\o\c\s\3\d\d\y\t\p\z\u\n\q\z\p\q\y\d\1\h\e\9\q\s\h\m\6\w\g\1\u\h\7\7\b\7\2\3\u\e\2\l\p\z\6\h\5\u\v\2\o\k\7\8\l\8\h\c\u\a\3\8\m\w\u\l\3\c\y\o\b\0\u\3\s\b\k\a\5\2\3\n\z\e\e\p\i\d\d\t\1\p\u\9\v\j\i\u\9\d\k\t\b\3\1\2\c\v\c\x\g\7\m\9\9\a\6\0\r\9\7\t\j\6\b\0\a\g\e\7\b\z\j\7\8\l\u\2\w\y\5\2\w\j\3\d\8\c\f\1\b\x\4\t\5\s\s\v\z\j\f\s\1\5\z\s\8\v\f\j\m\l\7\t\1\y\p\f\k\h\7\6\k\h\8\l\u\1\g\j\q\5\k\m\a\z\q\b\k\k\u\k\5\8\1\i\u\i\4\e\x\b\i\i\x\k\k\x\u\g\h\a\f\t\6\7\j\c\d\8\e\f\x\a\3\g\u\v\q\7\k\w\b\n\e\n\l\6\m\u\a\m\c\1\f\p\n\p\u\9\6\h\x\2\h\j\r\t\0\e\i\d\g\q\s\i\4\z\u\s\c\a\z\8\1\m\8\q\o\j\w\g\h\b\o\6\z\d\c\5\l\w\m\v\2\h\e\3\b\q\j\v\t\h\4\a\j\i\1\8\t\l\2\v\2\z\o\w\r\4\v\f\3\k\f\d\2\x\r\4\d\l\y\7\o\2\r\q\7\w\v\l\n\s\2\5\b\7\k\v\m\3\i\e\i\t\g\r\f\1\8\3\m\k\y\s\3\h\n\3\c\c\k\f\q\g\8\p\i\e\o\x\x\d\j\5\6\e\s\s\3\n\z\1\z\q\f\2\q\f\y\u\6\w\z\p\q\6\5\k\y\3\d\q\n\y\j\e\t\y\o\y\5\y\6\y\m\t\p\6\3\0\a\x\c\m\r\n\f\y\8\k\0\z\h\k\k\s\p\p\9\j\j\f\k\w\l\8\3\n\w\2\6\r\j\5\x\b\s\u\b\p\p\q\h\q\z\1\y\s\p\9\h\x\n\h\t\r\f\2\7\s\s\c\w\7\9\q\y\j\j\y\2\f\4\t\a\e\5\p\n\f\5\f\y\1\z\f\w\2\x\o\r\x\b\3\8\t\b\s\s\d\k\0\k\h\i\n\q\d\i\o\7\9\1\c\d\s\t\8\3\4\7\r\6\l\o\w\h\l\2\i\z\p\r\z\d\l\v\l\8\e\2\2\d\e\p\r\f\l\i\o\a\u\j\c\m\z\u\s\n\j\x\i\c\r\8\f\a\j\1\s\g\q\x\t\p\s\z\1\n\r\8\o\9\g\m\s\5\g\7\v\g\p\m\s\y\l\x\1\s\m\3\j\e\4\x\o\z\5\8\f\2\8\l\e\7\4\x\c\2\8\e\4\d\i\t\w\w\c\g\j\j\w\8\z\9\l\g\x\c\9\d\5\m\5\1\t\l\e\m\4\1\9\y\7\v\w\l\b\n\1\9\u\l\v\0\a\s\t\9\d\5\f\j\6\n\8\1\n\l\8\m\4\f\a\9\o\7\b\1\o\i\w\5\j\d\e\3\9\k\c\k\w\v\6\a\n\y\k\0\l\f\b\t\3\2\o\q\3\q\8\i\k\c\f\o\y\k\d\4\d\w\w\o\6\w\8\e\1\7\s
\4\h\c\u\u\k\4\x\7\w\5\f\r\z\l\x\5\0\v\6\w\t\x\w\3\j\l\a\k\w\o\p\t\a\5\4\r\n\u\x\7\j\0\t\u\d\w\p\b\x\n\j\m\7\3\0\u\3\q\a\d\5\o\t\0\t\d\9\x\v\6\m\b\v\h\n\k\u\e\0\g\u\8\x\d\1\3\9\r\d\p\b\3\7\w\b\c\d\8\d\p\9\t\k\y\b\i\o\9\b\7\a\g\z\h\h\l\x\9\a\e\s\m\i\0\x\j\3\1\b\x\i\9\s\s\8\t\r\b\r\h\i\e\i\l\1\p\s\r\r\4\k\y\k\8\v\z\b\3\3\g\f\y\e\n\8\n\p\w\s\n\o\g\1\o\2\k\k\k\l\1\6\i\v\n\d\v\q\w\3\t\q\6\j\y\3\e\a\4\8\z\j\0\1\n\c\t\n\z\2\x\f\9\g\f\2\0\m\j\e\9\0\w\w\9\q\w\q\7\f\5\x\a\0\p\n\8\z\c\o\w\v\w\y\s\o\y\z\w\j\u\a\8\i\e\j\d\4\f\k\7\l\d\8\r\y\4\p\x\w\3\s\p\j\6\3\8\2\k\3\h\8\h\r\j\c\8\g\n\h\b\k\r\c\q\5\9\w\7\d\g\f\q\4\2\x\u\9\p\5\j\x\n\m\s\v\u\x\x\q\s\y\d\e\q\g\b\e\1\f\p\d\a\k\8\x\a\4\9\o\4\z\y\s\d\m\w\7\4\d\f\n\s\8\n\a\e\9\d\i\e\z\z\c\8\p\b\2\j\l\k\p\v\8\7\q\y\z\v\c\b\1\6\7\9\h\a\2\p\y\l\a\i\t\j\k\n\k\b\m\i\a\s\r\q\1\j\8\0\e\9\3\o\h\w\x\2\q\a\b\8\2\e\u\2\5\0\o\q\s\e\z\l\t\5\x\i\8\l\v\r\o\8\c\h\j\u\f\f\5\6\h\3\l\b\x\6\q\9\v\k\s\s\1\e\x\q\t\s\z\v\c\u\4\4\t\m\u\j\c\v\a\m\3\b\k\y\r\o\l\3\o\k\g\v\2\v\c\p\i\q\a\5\w\d\7\3\d\2\e\6\0\t\c\y\i\o\m\b\d\j\5\s\l\u\7\4\d\d\s\i\x\i\h\v\g\p\t\l\5\v\x\n\5\t\v\y\p\5\k\j\l\v\s\g\l\y\r\n\u\q\0\k\l\4\0\k\f\p\4\a\h\l\v\v\n\r\t\y\1\i\q\1\0\z\g\s\f\4\l\h\j\3\p\q\w\h\l\b\q\v\t\t\d\q\8\3\x\w\n\8\r\2\f\d\s\i\n\x\t\l\a\a\f\n\z\5\l\r\f\1\w\y\k\y\4\c\6\s\n\q\x\3\p\r\q\k\9\v\t\l\0\d\e\x\f\0\l\h\v\i\0\e\6\z\y\p\v\8\j\p\m\8\l\9\c\1\6\p\o\p\4\4\a\r\n\l\j\0\s\g\s\1\c\5\b\v\l\b\k\e\w\g\i\5\h\6\t\q\2\g\g\k\x\9\p\n\v\3\0\f\l\4\w\p\m\b\b\t\r\y\w\y\8\t\u\a\j\9\v\t\n\f\g\4\g\l\z\v\2\z\n\b\l\s\6\c\j\9\n\d\n\7\o\d\k\k\x\4\b\4\o\2\x\p\i\0\c\7\d\a\8\o\2\f\d\3\2\0\k\r\u\d\k\v\a\7\6\e\0\8\2\3\h\d\x\6\j\u\o\8\f\m\9\s\x\o\w\8\1\m\2\f\j\z\b\e\2\6\y\u\9\5\v\k\p\w\9\i\f\d\v\b\p\t\i\0\b\3\k\6\x\4\i\a\j\j\1\i\t\p\a\n\e\h\z\d\g\c\l\e\4\x\k\8\p\6\6\1\i\h\0\1\2\b\8\u\r\6\o\q\6\s\p\2\5\f\x\e\4\4\t\x\l\z\b\1\s\d\7\g\j\g\k\5\6\x\4\5\z\x\q\g\4\3\5\v\s\m\g\q\f\m\v\3\r\p\4\y\t\0\w\x\m\v\3\l\n\6\v\0\n\q\x\9\p\e\e\j\9\q\b\u\8\v\n\b\9\9\k\b\r\4\z\i\f\l\k\5\a\h\c\1\o\v\2\u\q\4\r\1\5\3\s\t\n\5\6\9\5\h\x\j\c\g\o\2\x\j\t\v\4\i\w\o\w\e\k\m\t\d\o\v\o\w\f\p\0\j\e\0\r\t\p\u\6\v\v\j\2\n\f\k\0\n\w\k\h\a\j\y\q\s\u\v\1\m\n\m\i\v\0\z\b\p\h\6\1\0\s\h\w\s\j\n\d\r\s\q\k\c\w\a\c\g\y\6\h\3\o\1\9\8\b\n\c\w\a\k\2\d\v\z\6\3\q\d\3\t\5\9\u\7\o\h\d\k\1\z\3\8\3\t\n\2\a\2\f\x\4\p\e\e\d\p\z\h\7\w\q\w\i\o\p\y\5\d\p\k\b\w\4\4\z\v\h\a\5\n\q\p\f\e\e\7\2\h\v\k\1\d\9\a\2\z\s\e\p\u\3\v\v\g\9\s\c\e\b\5\4\f\o\x\h\4\j\j\z\j\v\2\m\q\v\w\q\y\9\a\a\4\c\h\1\u\d\6\x\7\e\2\a\1\d\l\b\b\c\v\d\1\3\p\b\v\1\d\l\o\2\6\4\5\s\5\l\k\y\2\0\u\l\v\d\y\4\t\0\3\w\2\4\6\i\o\c\f\s\x\x\t\y\1\z\w\i\t\m\8\u\h\s\4\h\d\t\e ]] 00:08:00.549 00:08:00.549 real 0m3.048s 00:08:00.549 user 0m2.561s 00:08:00.549 sys 0m1.584s 00:08:00.549 ************************************ 00:08:00.549 END TEST dd_rw_offset 00:08:00.549 ************************************ 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw.dd_rw_offset -- common/autotest_common.sh@10 -- # set +x 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@1 -- # cleanup 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@76 -- # clear_nvme Nvme0n1 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@11 -- # local nvme_ref= 00:08:00.549 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@12 -- # local size=0xffff 00:08:00.550 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@14 -- # local bs=1048576 
00:08:00.550 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@15 -- # local count=1 00:08:00.550 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=1 --json /dev/fd/62 00:08:00.550 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@18 -- # gen_conf 00:08:00.550 01:28:08 spdk_dd.spdk_dd_basic_rw -- dd/common.sh@31 -- # xtrace_disable 00:08:00.550 01:28:08 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:00.550 { 00:08:00.550 "subsystems": [ 00:08:00.550 { 00:08:00.550 "subsystem": "bdev", 00:08:00.550 "config": [ 00:08:00.550 { 00:08:00.550 "params": { 00:08:00.550 "trtype": "pcie", 00:08:00.550 "traddr": "0000:00:10.0", 00:08:00.550 "name": "Nvme0" 00:08:00.550 }, 00:08:00.550 "method": "bdev_nvme_attach_controller" 00:08:00.550 }, 00:08:00.550 { 00:08:00.550 "method": "bdev_wait_for_examine" 00:08:00.550 } 00:08:00.550 ] 00:08:00.550 } 00:08:00.550 ] 00:08:00.550 } 00:08:00.550 [2024-11-17 01:28:08.989972] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:00.550 [2024-11-17 01:28:08.990114] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61487 ] 00:08:00.814 [2024-11-17 01:28:09.153787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.814 [2024-11-17 01:28:09.252893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.078 [2024-11-17 01:28:09.410100] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:01.337  [2024-11-17T01:28:10.733Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:08:02.274 00:08:02.274 01:28:10 spdk_dd.spdk_dd_basic_rw -- dd/basic_rw.sh@77 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.274 ************************************ 00:08:02.274 END TEST spdk_dd_basic_rw 00:08:02.274 ************************************ 00:08:02.274 00:08:02.274 real 0m35.975s 00:08:02.274 user 0m29.774s 00:08:02.274 sys 0m16.460s 00:08:02.274 01:28:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.274 01:28:10 spdk_dd.spdk_dd_basic_rw -- common/autotest_common.sh@10 -- # set +x 00:08:02.274 01:28:10 spdk_dd -- dd/dd.sh@21 -- # run_test spdk_dd_posix /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:02.274 01:28:10 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.274 01:28:10 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.274 01:28:10 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:02.274 ************************************ 00:08:02.274 START TEST spdk_dd_posix 00:08:02.274 ************************************ 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/posix.sh 00:08:02.274 * Looking for test storage... 
00:08:02.274 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@344 -- # case "$op" in 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@345 -- # : 1 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # decimal 1 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=1 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 1 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # decimal 2 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@353 -- # local d=2 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@355 -- # echo 2 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@368 -- # return 0 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.274 --rc genhtml_branch_coverage=1 00:08:02.274 --rc genhtml_function_coverage=1 00:08:02.274 --rc genhtml_legend=1 00:08:02.274 --rc geninfo_all_blocks=1 00:08:02.274 --rc geninfo_unexecuted_blocks=1 00:08:02.274 00:08:02.274 ' 00:08:02.274 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.274 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.274 --rc genhtml_branch_coverage=1 00:08:02.274 --rc genhtml_function_coverage=1 00:08:02.274 --rc genhtml_legend=1 00:08:02.274 --rc geninfo_all_blocks=1 00:08:02.275 --rc geninfo_unexecuted_blocks=1 00:08:02.275 00:08:02.275 ' 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.275 --rc genhtml_branch_coverage=1 00:08:02.275 --rc genhtml_function_coverage=1 00:08:02.275 --rc genhtml_legend=1 00:08:02.275 --rc geninfo_all_blocks=1 00:08:02.275 --rc geninfo_unexecuted_blocks=1 00:08:02.275 00:08:02.275 ' 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.275 --rc genhtml_branch_coverage=1 00:08:02.275 --rc genhtml_function_coverage=1 00:08:02.275 --rc genhtml_legend=1 00:08:02.275 --rc geninfo_all_blocks=1 00:08:02.275 --rc geninfo_unexecuted_blocks=1 00:08:02.275 00:08:02.275 ' 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- paths/export.sh@5 -- # export PATH 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@121 -- # msg[0]=', using AIO' 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@122 -- # msg[1]=', liburing in use' 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@123 -- # msg[2]=', disabling liburing, forcing AIO' 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@125 -- # trap cleanup EXIT 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@127 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@128 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@130 -- # tests 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@99 -- # printf '* First test run%s\n' ', liburing in use' 00:08:02.275 * First test run, liburing in use 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- dd/posix.sh@102 -- # run_test dd_flag_append append 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:02.275 ************************************ 00:08:02.275 START TEST dd_flag_append 00:08:02.275 ************************************ 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1129 -- # append 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@16 -- # local dump0 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@17 -- # local dump1 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # gen_bytes 32 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@19 -- # dump0=m5jh0pdtz5gdruwf6lg9akh9yh03vsag 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # gen_bytes 32 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/common.sh@98 -- # xtrace_disable 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@20 -- # dump1=3ki7ezyk0umgmi03po543qlhh1zq394o 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@22 -- # printf %s m5jh0pdtz5gdruwf6lg9akh9yh03vsag 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@23 -- # printf %s 3ki7ezyk0umgmi03po543qlhh1zq394o 00:08:02.275 01:28:10 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:02.535 [2024-11-17 01:28:10.795190] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:02.535 [2024-11-17 01:28:10.795371] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61571 ] 00:08:02.535 [2024-11-17 01:28:10.968718] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.794 [2024-11-17 01:28:11.050257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.794 [2024-11-17 01:28:11.203279] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:03.054  [2024-11-17T01:28:12.450Z] Copying: 32/32 [B] (average 31 kBps) 00:08:03.991 00:08:03.991 ************************************ 00:08:03.991 END TEST dd_flag_append 00:08:03.991 ************************************ 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_append -- dd/posix.sh@27 -- # [[ 3ki7ezyk0umgmi03po543qlhh1zq394om5jh0pdtz5gdruwf6lg9akh9yh03vsag == \3\k\i\7\e\z\y\k\0\u\m\g\m\i\0\3\p\o\5\4\3\q\l\h\h\1\z\q\3\9\4\o\m\5\j\h\0\p\d\t\z\5\g\d\r\u\w\f\6\l\g\9\a\k\h\9\y\h\0\3\v\s\a\g ]] 00:08:03.991 00:08:03.991 real 0m1.431s 00:08:03.991 user 0m1.142s 00:08:03.991 sys 0m0.770s 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_append -- common/autotest_common.sh@10 -- # set +x 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix -- dd/posix.sh@103 -- # run_test dd_flag_directory directory 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:03.991 ************************************ 00:08:03.991 START TEST dd_flag_directory 00:08:03.991 ************************************ 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1129 -- # directory 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:03.991 01:28:12 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:03.991 [2024-11-17 01:28:12.278509] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:03.991 [2024-11-17 01:28:12.278897] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61606 ] 00:08:04.250 [2024-11-17 01:28:12.461895] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.250 [2024-11-17 01:28:12.556235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.508 [2024-11-17 01:28:12.715042] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:04.508 [2024-11-17 01:28:12.797638] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:04.508 [2024-11-17 01:28:12.797715] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:04.509 [2024-11-17 01:28:12.797739] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:05.075 [2024-11-17 01:28:13.420194] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@652 -- # local es=0 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.334 01:28:13 
spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:05.334 01:28:13 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:05.334 [2024-11-17 01:28:13.789533] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:05.334 [2024-11-17 01:28:13.790017] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61633 ] 00:08:05.593 [2024-11-17 01:28:13.973860] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.850 [2024-11-17 01:28:14.065561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.850 [2024-11-17 01:28:14.209251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:05.850 [2024-11-17 01:28:14.288302] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.850 [2024-11-17 01:28:14.288379] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:05.850 [2024-11-17 01:28:14.288403] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:06.418 [2024-11-17 01:28:14.866754] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:06.677 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@655 -- # es=236 00:08:06.677 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:06.677 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@664 -- # es=108 00:08:06.677 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@665 -- # case "$es" in 00:08:06.677 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@672 -- # es=1 00:08:06.677 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:06.677 00:08:06.677 real 0m2.925s 00:08:06.677 user 0m2.334s 00:08:06.677 sys 0m0.369s 00:08:06.677 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.677 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_directory -- common/autotest_common.sh@10 -- # set +x 00:08:06.677 ************************************ 00:08:06.677 END TEST dd_flag_directory 00:08:06.677 ************************************ 00:08:06.937 01:28:15 
spdk_dd.spdk_dd_posix -- dd/posix.sh@104 -- # run_test dd_flag_nofollow nofollow 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:06.937 ************************************ 00:08:06.937 START TEST dd_flag_nofollow 00:08:06.937 ************************************ 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1129 -- # nofollow 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:06.937 01:28:15 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:06.937 [2024-11-17 01:28:15.265843] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:06.937 [2024-11-17 01:28:15.266006] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61668 ] 00:08:07.196 [2024-11-17 01:28:15.438046] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.196 [2024-11-17 01:28:15.525145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.455 [2024-11-17 01:28:15.684718] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:07.455 [2024-11-17 01:28:15.766424] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:07.455 [2024-11-17 01:28:15.766503] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:07.455 [2024-11-17 01:28:15.766526] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:08.024 [2024-11-17 01:28:16.354093] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@652 -- # local es=0 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:08.283 01:28:16 
spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:08.283 01:28:16 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:08.283 [2024-11-17 01:28:16.697775] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:08.283 [2024-11-17 01:28:16.698227] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61695 ] 00:08:08.543 [2024-11-17 01:28:16.859743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.543 [2024-11-17 01:28:16.944174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.802 [2024-11-17 01:28:17.102627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:08.802 [2024-11-17 01:28:17.184966] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:08.802 [2024-11-17 01:28:17.185043] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:08.802 [2024-11-17 01:28:17.185095] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:09.371 [2024-11-17 01:28:17.788020] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@655 -- # es=216 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@664 -- # es=88 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@665 -- # case "$es" in 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@672 -- # es=1 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@46 -- # gen_bytes 512 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/common.sh@98 -- # xtrace_disable 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:09.630 01:28:18 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:09.889 [2024-11-17 01:28:18.148323] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:09.889 [2024-11-17 01:28:18.148493] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61709 ] 00:08:09.889 [2024-11-17 01:28:18.325857] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.148 [2024-11-17 01:28:18.413598] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.148 [2024-11-17 01:28:18.571716] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:10.408  [2024-11-17T01:28:19.833Z] Copying: 512/512 [B] (average 500 kBps) 00:08:11.374 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- dd/posix.sh@49 -- # [[ wjxtsykvoycrr5nowuzauggrkhmjje94k884rfzi96efzy8t1795k98zioxn6yb3agibi4nbsprxkv5gx7p3v32tg87ed3hofy7vzk6r8ivwfiwhzrmrk8rqphro4ujb1oojztcllp1cijcacvisbup00ybc9s7nbf5sodxt7yg5g5jv5scbkr9tecv1zc7qsvzqs44wy2gqgx1euchs6vjqsp5x050n5jifum2290dpuldxwsyutyuexikq3r0ewsu0za4nxzwamljs5jr91dnd3xxh5m9d5w9z1ylonju9bleiiipmq8koe5ly2qgbms280zxtfxo6nbdffgixiwbyk5hk40k36os2yngf9k4lp1or0x6xv8va91c8tx2seozo0g8i0mtmurgwm3j3brqs8g0ko6bzlq9p8e8khnlcn2z0l3w8p0006cedmfmfue4opdzi4tlpipneudz13av3yj8wv3gnvcgquoramjyymbh4k646lfsm24nwyj14 == \w\j\x\t\s\y\k\v\o\y\c\r\r\5\n\o\w\u\z\a\u\g\g\r\k\h\m\j\j\e\9\4\k\8\8\4\r\f\z\i\9\6\e\f\z\y\8\t\1\7\9\5\k\9\8\z\i\o\x\n\6\y\b\3\a\g\i\b\i\4\n\b\s\p\r\x\k\v\5\g\x\7\p\3\v\3\2\t\g\8\7\e\d\3\h\o\f\y\7\v\z\k\6\r\8\i\v\w\f\i\w\h\z\r\m\r\k\8\r\q\p\h\r\o\4\u\j\b\1\o\o\j\z\t\c\l\l\p\1\c\i\j\c\a\c\v\i\s\b\u\p\0\0\y\b\c\9\s\7\n\b\f\5\s\o\d\x\t\7\y\g\5\g\5\j\v\5\s\c\b\k\r\9\t\e\c\v\1\z\c\7\q\s\v\z\q\s\4\4\w\y\2\g\q\g\x\1\e\u\c\h\s\6\v\j\q\s\p\5\x\0\5\0\n\5\j\i\f\u\m\2\2\9\0\d\p\u\l\d\x\w\s\y\u\t\y\u\e\x\i\k\q\3\r\0\e\w\s\u\0\z\a\4\n\x\z\w\a\m\l\j\s\5\j\r\9\1\d\n\d\3\x\x\h\5\m\9\d\5\w\9\z\1\y\l\o\n\j\u\9\b\l\e\i\i\i\p\m\q\8\k\o\e\5\l\y\2\q\g\b\m\s\2\8\0\z\x\t\f\x\o\6\n\b\d\f\f\g\i\x\i\w\b\y\k\5\h\k\4\0\k\3\6\o\s\2\y\n\g\f\9\k\4\l\p\1\o\r\0\x\6\x\v\8\v\a\9\1\c\8\t\x\2\s\e\o\z\o\0\g\8\i\0\m\t\m\u\r\g\w\m\3\j\3\b\r\q\s\8\g\0\k\o\6\b\z\l\q\9\p\8\e\8\k\h\n\l\c\n\2\z\0\l\3\w\8\p\0\0\0\6\c\e\d\m\f\m\f\u\e\4\o\p\d\z\i\4\t\l\p\i\p\n\e\u\d\z\1\3\a\v\3\y\j\8\w\v\3\g\n\v\c\g\q\u\o\r\a\m\j\y\y\m\b\h\4\k\6\4\6\l\f\s\m\2\4\n\w\y\j\1\4 ]] 00:08:11.374 00:08:11.374 real 0m4.391s 00:08:11.374 user 0m3.505s 00:08:11.374 sys 0m1.195s 00:08:11.374 ************************************ 00:08:11.374 END TEST dd_flag_nofollow 00:08:11.374 ************************************ 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_nofollow -- common/autotest_common.sh@10 -- # set +x 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix -- dd/posix.sh@105 -- # run_test dd_flag_noatime noatime 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:11.374 ************************************ 00:08:11.374 START TEST dd_flag_noatime 00:08:11.374 ************************************ 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1129 -- # noatime 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@53 -- # local 
atime_if 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@54 -- # local atime_of 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@58 -- # gen_bytes 512 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/common.sh@98 -- # xtrace_disable 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@60 -- # atime_if=1731806898 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@61 -- # atime_of=1731806899 00:08:11.374 01:28:19 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@66 -- # sleep 1 00:08:12.311 01:28:20 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:12.311 [2024-11-17 01:28:20.724493] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:12.311 [2024-11-17 01:28:20.724673] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61764 ] 00:08:12.570 [2024-11-17 01:28:20.910581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.831 [2024-11-17 01:28:21.035978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.831 [2024-11-17 01:28:21.209231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:13.090  [2024-11-17T01:28:22.485Z] Copying: 512/512 [B] (average 500 kBps) 00:08:14.026 00:08:14.026 01:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:14.026 01:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@69 -- # (( atime_if == 1731806898 )) 00:08:14.026 01:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.026 01:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@70 -- # (( atime_of == 1731806899 )) 00:08:14.026 01:28:22 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:14.026 [2024-11-17 01:28:22.282853] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
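The stat/sleep/copy sequence above is the core of the noatime check: the source atime is captured with stat --printf=%X, the file is then read once with --iflag=noatime (atime must not move) and later once without it (atime may advance). A rough coreutils equivalent, assuming a filesystem where atime updates are not already suppressed by a noatime/relatime mount and that the caller owns the file (O_NOATIME otherwise fails), would be:

  before=$(stat --printf=%X dump0)                  # atime as epoch seconds
  sleep 1
  dd if=dump0 of=dump1 iflag=noatime status=none    # O_NOATIME read
  [[ $(stat --printf=%X dump0) -eq $before ]] && echo 'atime preserved'
  sleep 1
  dd if=dump0 of=dump1 status=none                  # plain read may bump atime
  [[ $(stat --printf=%X dump0) -ge $before ]] && echo 'atime checked after normal read'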
00:08:14.026 [2024-11-17 01:28:22.283034] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61789 ] 00:08:14.026 [2024-11-17 01:28:22.462388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:14.285 [2024-11-17 01:28:22.551027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:14.285 [2024-11-17 01:28:22.698627] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:14.543  [2024-11-17T01:28:23.938Z] Copying: 512/512 [B] (average 500 kBps) 00:08:15.479 00:08:15.479 01:28:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:15.479 01:28:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- dd/posix.sh@73 -- # (( atime_if < 1731806902 )) 00:08:15.479 00:08:15.479 real 0m4.054s 00:08:15.479 user 0m2.435s 00:08:15.479 sys 0m1.656s 00:08:15.479 01:28:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:15.479 01:28:23 spdk_dd.spdk_dd_posix.dd_flag_noatime -- common/autotest_common.sh@10 -- # set +x 00:08:15.479 ************************************ 00:08:15.479 END TEST dd_flag_noatime 00:08:15.479 ************************************ 00:08:15.479 01:28:23 spdk_dd.spdk_dd_posix -- dd/posix.sh@106 -- # run_test dd_flags_misc io 00:08:15.479 01:28:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:15.480 ************************************ 00:08:15.480 START TEST dd_flags_misc 00:08:15.480 ************************************ 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1129 -- # io 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:15.480 01:28:23 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:15.480 [2024-11-17 01:28:23.788599] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:15.480 [2024-11-17 01:28:23.788737] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61830 ] 00:08:15.738 [2024-11-17 01:28:23.953692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.738 [2024-11-17 01:28:24.037616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.997 [2024-11-17 01:28:24.196564] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:15.997  [2024-11-17T01:28:25.394Z] Copying: 512/512 [B] (average 500 kBps) 00:08:16.935 00:08:16.935 01:28:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9qwwsopjac551wk3cagjh50sr3keam9cmpf00a6lmhmk6tz600ih8oe4e1pwpf76la9h0sxylsa7ojx3zltcl20ak4phkza3upsaksgiwzyf8jqc0uctzcs8t5q1nynrds1f8rwqdgb6q4kon6uts4qggreojfey727ir61orsm7bpotb8h378p4l65tbw6oczbvqu497nj5yudm9lgjnnz5zdajpo684n2zljizv5xtvndjuloqf4enmnimdhllnizrbzu1ikl3k6dbt228218dlmjvw1n2p5vuzimp68px4pgxhc5w7961hxj6f8n63cdnv4n521imz44nt7fp8tzhs4okbi6n9pnfcql9jqtqgoit17h7cuxz4hmfgdfmoxzxaf9uz2wwn1ast0nuii7o4e8nzvc987xz9dh11r59rgjvrbhwgghejarxs01kxz5uvk7vsenh368hy4tjfw3slrkdk4bax1uidytt7xinsujqifrvsh8vwbo6xif7 == \9\q\w\w\s\o\p\j\a\c\5\5\1\w\k\3\c\a\g\j\h\5\0\s\r\3\k\e\a\m\9\c\m\p\f\0\0\a\6\l\m\h\m\k\6\t\z\6\0\0\i\h\8\o\e\4\e\1\p\w\p\f\7\6\l\a\9\h\0\s\x\y\l\s\a\7\o\j\x\3\z\l\t\c\l\2\0\a\k\4\p\h\k\z\a\3\u\p\s\a\k\s\g\i\w\z\y\f\8\j\q\c\0\u\c\t\z\c\s\8\t\5\q\1\n\y\n\r\d\s\1\f\8\r\w\q\d\g\b\6\q\4\k\o\n\6\u\t\s\4\q\g\g\r\e\o\j\f\e\y\7\2\7\i\r\6\1\o\r\s\m\7\b\p\o\t\b\8\h\3\7\8\p\4\l\6\5\t\b\w\6\o\c\z\b\v\q\u\4\9\7\n\j\5\y\u\d\m\9\l\g\j\n\n\z\5\z\d\a\j\p\o\6\8\4\n\2\z\l\j\i\z\v\5\x\t\v\n\d\j\u\l\o\q\f\4\e\n\m\n\i\m\d\h\l\l\n\i\z\r\b\z\u\1\i\k\l\3\k\6\d\b\t\2\2\8\2\1\8\d\l\m\j\v\w\1\n\2\p\5\v\u\z\i\m\p\6\8\p\x\4\p\g\x\h\c\5\w\7\9\6\1\h\x\j\6\f\8\n\6\3\c\d\n\v\4\n\5\2\1\i\m\z\4\4\n\t\7\f\p\8\t\z\h\s\4\o\k\b\i\6\n\9\p\n\f\c\q\l\9\j\q\t\q\g\o\i\t\1\7\h\7\c\u\x\z\4\h\m\f\g\d\f\m\o\x\z\x\a\f\9\u\z\2\w\w\n\1\a\s\t\0\n\u\i\i\7\o\4\e\8\n\z\v\c\9\8\7\x\z\9\d\h\1\1\r\5\9\r\g\j\v\r\b\h\w\g\g\h\e\j\a\r\x\s\0\1\k\x\z\5\u\v\k\7\v\s\e\n\h\3\6\8\h\y\4\t\j\f\w\3\s\l\r\k\d\k\4\b\a\x\1\u\i\d\y\t\t\7\x\i\n\s\u\j\q\i\f\r\v\s\h\8\v\w\b\o\6\x\i\f\7 ]] 00:08:16.935 01:28:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:16.935 01:28:25 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:16.935 [2024-11-17 01:28:25.238129] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
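The surrounding block is the first pass of the flags_misc matrix: each read flag in (direct, nonblock) is combined with each write flag in (direct, nonblock, sync, dsync), and after every copy the destination is compared byte-for-byte against the source string. A condensed sketch of the same loop with coreutils dd, using bs=512 so the 512-byte payload satisfies O_DIRECT's alignment requirement, might look like:

  flags_ro=(direct nonblock)
  flags_rw=(direct nonblock sync dsync)
  for r in "${flags_ro[@]}"; do
    for w in "${flags_rw[@]}"; do
      dd if=dump0 of=dump1 iflag="$r" oflag="$w" bs=512 status=none
      cmp -s dump0 dump1 && echo "ok iflag=$r oflag=$w"
    done
  done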
00:08:16.935 [2024-11-17 01:28:25.238314] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61851 ] 00:08:17.193 [2024-11-17 01:28:25.418107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.193 [2024-11-17 01:28:25.506487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.452 [2024-11-17 01:28:25.670585] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:17.452  [2024-11-17T01:28:26.848Z] Copying: 512/512 [B] (average 500 kBps) 00:08:18.389 00:08:18.389 01:28:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9qwwsopjac551wk3cagjh50sr3keam9cmpf00a6lmhmk6tz600ih8oe4e1pwpf76la9h0sxylsa7ojx3zltcl20ak4phkza3upsaksgiwzyf8jqc0uctzcs8t5q1nynrds1f8rwqdgb6q4kon6uts4qggreojfey727ir61orsm7bpotb8h378p4l65tbw6oczbvqu497nj5yudm9lgjnnz5zdajpo684n2zljizv5xtvndjuloqf4enmnimdhllnizrbzu1ikl3k6dbt228218dlmjvw1n2p5vuzimp68px4pgxhc5w7961hxj6f8n63cdnv4n521imz44nt7fp8tzhs4okbi6n9pnfcql9jqtqgoit17h7cuxz4hmfgdfmoxzxaf9uz2wwn1ast0nuii7o4e8nzvc987xz9dh11r59rgjvrbhwgghejarxs01kxz5uvk7vsenh368hy4tjfw3slrkdk4bax1uidytt7xinsujqifrvsh8vwbo6xif7 == \9\q\w\w\s\o\p\j\a\c\5\5\1\w\k\3\c\a\g\j\h\5\0\s\r\3\k\e\a\m\9\c\m\p\f\0\0\a\6\l\m\h\m\k\6\t\z\6\0\0\i\h\8\o\e\4\e\1\p\w\p\f\7\6\l\a\9\h\0\s\x\y\l\s\a\7\o\j\x\3\z\l\t\c\l\2\0\a\k\4\p\h\k\z\a\3\u\p\s\a\k\s\g\i\w\z\y\f\8\j\q\c\0\u\c\t\z\c\s\8\t\5\q\1\n\y\n\r\d\s\1\f\8\r\w\q\d\g\b\6\q\4\k\o\n\6\u\t\s\4\q\g\g\r\e\o\j\f\e\y\7\2\7\i\r\6\1\o\r\s\m\7\b\p\o\t\b\8\h\3\7\8\p\4\l\6\5\t\b\w\6\o\c\z\b\v\q\u\4\9\7\n\j\5\y\u\d\m\9\l\g\j\n\n\z\5\z\d\a\j\p\o\6\8\4\n\2\z\l\j\i\z\v\5\x\t\v\n\d\j\u\l\o\q\f\4\e\n\m\n\i\m\d\h\l\l\n\i\z\r\b\z\u\1\i\k\l\3\k\6\d\b\t\2\2\8\2\1\8\d\l\m\j\v\w\1\n\2\p\5\v\u\z\i\m\p\6\8\p\x\4\p\g\x\h\c\5\w\7\9\6\1\h\x\j\6\f\8\n\6\3\c\d\n\v\4\n\5\2\1\i\m\z\4\4\n\t\7\f\p\8\t\z\h\s\4\o\k\b\i\6\n\9\p\n\f\c\q\l\9\j\q\t\q\g\o\i\t\1\7\h\7\c\u\x\z\4\h\m\f\g\d\f\m\o\x\z\x\a\f\9\u\z\2\w\w\n\1\a\s\t\0\n\u\i\i\7\o\4\e\8\n\z\v\c\9\8\7\x\z\9\d\h\1\1\r\5\9\r\g\j\v\r\b\h\w\g\g\h\e\j\a\r\x\s\0\1\k\x\z\5\u\v\k\7\v\s\e\n\h\3\6\8\h\y\4\t\j\f\w\3\s\l\r\k\d\k\4\b\a\x\1\u\i\d\y\t\t\7\x\i\n\s\u\j\q\i\f\r\v\s\h\8\v\w\b\o\6\x\i\f\7 ]] 00:08:18.389 01:28:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:18.389 01:28:26 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:18.389 [2024-11-17 01:28:26.758857] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:18.389 [2024-11-17 01:28:26.758996] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61873 ] 00:08:18.646 [2024-11-17 01:28:26.928887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.646 [2024-11-17 01:28:27.014698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.905 [2024-11-17 01:28:27.178022] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:18.905  [2024-11-17T01:28:28.301Z] Copying: 512/512 [B] (average 166 kBps) 00:08:19.842 00:08:19.842 01:28:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9qwwsopjac551wk3cagjh50sr3keam9cmpf00a6lmhmk6tz600ih8oe4e1pwpf76la9h0sxylsa7ojx3zltcl20ak4phkza3upsaksgiwzyf8jqc0uctzcs8t5q1nynrds1f8rwqdgb6q4kon6uts4qggreojfey727ir61orsm7bpotb8h378p4l65tbw6oczbvqu497nj5yudm9lgjnnz5zdajpo684n2zljizv5xtvndjuloqf4enmnimdhllnizrbzu1ikl3k6dbt228218dlmjvw1n2p5vuzimp68px4pgxhc5w7961hxj6f8n63cdnv4n521imz44nt7fp8tzhs4okbi6n9pnfcql9jqtqgoit17h7cuxz4hmfgdfmoxzxaf9uz2wwn1ast0nuii7o4e8nzvc987xz9dh11r59rgjvrbhwgghejarxs01kxz5uvk7vsenh368hy4tjfw3slrkdk4bax1uidytt7xinsujqifrvsh8vwbo6xif7 == \9\q\w\w\s\o\p\j\a\c\5\5\1\w\k\3\c\a\g\j\h\5\0\s\r\3\k\e\a\m\9\c\m\p\f\0\0\a\6\l\m\h\m\k\6\t\z\6\0\0\i\h\8\o\e\4\e\1\p\w\p\f\7\6\l\a\9\h\0\s\x\y\l\s\a\7\o\j\x\3\z\l\t\c\l\2\0\a\k\4\p\h\k\z\a\3\u\p\s\a\k\s\g\i\w\z\y\f\8\j\q\c\0\u\c\t\z\c\s\8\t\5\q\1\n\y\n\r\d\s\1\f\8\r\w\q\d\g\b\6\q\4\k\o\n\6\u\t\s\4\q\g\g\r\e\o\j\f\e\y\7\2\7\i\r\6\1\o\r\s\m\7\b\p\o\t\b\8\h\3\7\8\p\4\l\6\5\t\b\w\6\o\c\z\b\v\q\u\4\9\7\n\j\5\y\u\d\m\9\l\g\j\n\n\z\5\z\d\a\j\p\o\6\8\4\n\2\z\l\j\i\z\v\5\x\t\v\n\d\j\u\l\o\q\f\4\e\n\m\n\i\m\d\h\l\l\n\i\z\r\b\z\u\1\i\k\l\3\k\6\d\b\t\2\2\8\2\1\8\d\l\m\j\v\w\1\n\2\p\5\v\u\z\i\m\p\6\8\p\x\4\p\g\x\h\c\5\w\7\9\6\1\h\x\j\6\f\8\n\6\3\c\d\n\v\4\n\5\2\1\i\m\z\4\4\n\t\7\f\p\8\t\z\h\s\4\o\k\b\i\6\n\9\p\n\f\c\q\l\9\j\q\t\q\g\o\i\t\1\7\h\7\c\u\x\z\4\h\m\f\g\d\f\m\o\x\z\x\a\f\9\u\z\2\w\w\n\1\a\s\t\0\n\u\i\i\7\o\4\e\8\n\z\v\c\9\8\7\x\z\9\d\h\1\1\r\5\9\r\g\j\v\r\b\h\w\g\g\h\e\j\a\r\x\s\0\1\k\x\z\5\u\v\k\7\v\s\e\n\h\3\6\8\h\y\4\t\j\f\w\3\s\l\r\k\d\k\4\b\a\x\1\u\i\d\y\t\t\7\x\i\n\s\u\j\q\i\f\r\v\s\h\8\v\w\b\o\6\x\i\f\7 ]] 00:08:19.842 01:28:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:19.842 01:28:28 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:19.842 [2024-11-17 01:28:28.248041] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
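For the sync and dsync passes the data still round-trips unchanged; the difference is only write-back semantics: O_DSYNC flushes file data on every write, O_SYNC additionally flushes metadata, and neither changes the bytes that land in dd.dump1. With coreutils dd the three common variants are, roughly:

  dd if=dump0 of=dump1 oflag=dsync bs=512 status=none   # data synced per block
  dd if=dump0 of=dump1 oflag=sync  bs=512 status=none   # data + metadata per block
  dd if=dump0 of=dump1 conv=fsync  bs=512 status=none   # single fsync at the end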
00:08:19.842 [2024-11-17 01:28:28.248225] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61893 ] 00:08:20.101 [2024-11-17 01:28:28.430889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.101 [2024-11-17 01:28:28.517584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:20.360 [2024-11-17 01:28:28.665264] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:20.361  [2024-11-17T01:28:29.764Z] Copying: 512/512 [B] (average 250 kBps) 00:08:21.305 00:08:21.305 01:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ 9qwwsopjac551wk3cagjh50sr3keam9cmpf00a6lmhmk6tz600ih8oe4e1pwpf76la9h0sxylsa7ojx3zltcl20ak4phkza3upsaksgiwzyf8jqc0uctzcs8t5q1nynrds1f8rwqdgb6q4kon6uts4qggreojfey727ir61orsm7bpotb8h378p4l65tbw6oczbvqu497nj5yudm9lgjnnz5zdajpo684n2zljizv5xtvndjuloqf4enmnimdhllnizrbzu1ikl3k6dbt228218dlmjvw1n2p5vuzimp68px4pgxhc5w7961hxj6f8n63cdnv4n521imz44nt7fp8tzhs4okbi6n9pnfcql9jqtqgoit17h7cuxz4hmfgdfmoxzxaf9uz2wwn1ast0nuii7o4e8nzvc987xz9dh11r59rgjvrbhwgghejarxs01kxz5uvk7vsenh368hy4tjfw3slrkdk4bax1uidytt7xinsujqifrvsh8vwbo6xif7 == \9\q\w\w\s\o\p\j\a\c\5\5\1\w\k\3\c\a\g\j\h\5\0\s\r\3\k\e\a\m\9\c\m\p\f\0\0\a\6\l\m\h\m\k\6\t\z\6\0\0\i\h\8\o\e\4\e\1\p\w\p\f\7\6\l\a\9\h\0\s\x\y\l\s\a\7\o\j\x\3\z\l\t\c\l\2\0\a\k\4\p\h\k\z\a\3\u\p\s\a\k\s\g\i\w\z\y\f\8\j\q\c\0\u\c\t\z\c\s\8\t\5\q\1\n\y\n\r\d\s\1\f\8\r\w\q\d\g\b\6\q\4\k\o\n\6\u\t\s\4\q\g\g\r\e\o\j\f\e\y\7\2\7\i\r\6\1\o\r\s\m\7\b\p\o\t\b\8\h\3\7\8\p\4\l\6\5\t\b\w\6\o\c\z\b\v\q\u\4\9\7\n\j\5\y\u\d\m\9\l\g\j\n\n\z\5\z\d\a\j\p\o\6\8\4\n\2\z\l\j\i\z\v\5\x\t\v\n\d\j\u\l\o\q\f\4\e\n\m\n\i\m\d\h\l\l\n\i\z\r\b\z\u\1\i\k\l\3\k\6\d\b\t\2\2\8\2\1\8\d\l\m\j\v\w\1\n\2\p\5\v\u\z\i\m\p\6\8\p\x\4\p\g\x\h\c\5\w\7\9\6\1\h\x\j\6\f\8\n\6\3\c\d\n\v\4\n\5\2\1\i\m\z\4\4\n\t\7\f\p\8\t\z\h\s\4\o\k\b\i\6\n\9\p\n\f\c\q\l\9\j\q\t\q\g\o\i\t\1\7\h\7\c\u\x\z\4\h\m\f\g\d\f\m\o\x\z\x\a\f\9\u\z\2\w\w\n\1\a\s\t\0\n\u\i\i\7\o\4\e\8\n\z\v\c\9\8\7\x\z\9\d\h\1\1\r\5\9\r\g\j\v\r\b\h\w\g\g\h\e\j\a\r\x\s\0\1\k\x\z\5\u\v\k\7\v\s\e\n\h\3\6\8\h\y\4\t\j\f\w\3\s\l\r\k\d\k\4\b\a\x\1\u\i\d\y\t\t\7\x\i\n\s\u\j\q\i\f\r\v\s\h\8\v\w\b\o\6\x\i\f\7 ]] 00:08:21.305 01:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:21.305 01:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@86 -- # gen_bytes 512 00:08:21.305 01:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/common.sh@98 -- # xtrace_disable 00:08:21.305 01:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:21.305 01:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:21.305 01:28:29 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:21.305 [2024-11-17 01:28:29.725464] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:21.305 [2024-11-17 01:28:29.725626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61910 ] 00:08:21.564 [2024-11-17 01:28:29.899294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.564 [2024-11-17 01:28:29.983943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.822 [2024-11-17 01:28:30.139082] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:21.823  [2024-11-17T01:28:31.220Z] Copying: 512/512 [B] (average 500 kBps) 00:08:22.761 00:08:22.761 01:28:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kx1bq42ixnlqzcwaffocu3emagqpqmizuta0qjqjjts82k2myq3j4z9lpvaab5bis2698iuspy8y74wg1au4pd7wckfn214ktump2yp9wa840a51lvh3tls8qm33qng0g0qgcfrm37n6c8g3sg07rw81y3hq0xn3ou5g9wp883iz8j5tjvmd8dfchunhin26m72yl8dmzseya8oami8qim31uxbqy1nws8av37vomppq1tl8jbdx6n0urqb98qbi3n5vk9hfs43ql93wgwfhz1vy1jferk32npt294vg9h0nc09goxwbcjzpc758bexcm7rmn80gwe23cs8qzto3aqikdzk1ck6wspa3kt5y6otab7mdcw2qerery8p0di72qyl0vw6d22z7t1sucyl2dhgveu7no6qism01nvaci6b5q416tvcfsu5c1q1ytymk8w8i417e1m681wvwq1ngn280479v6rxirsl66e4ndry4yky5dci0ju3yqwyzefrd == \k\x\1\b\q\4\2\i\x\n\l\q\z\c\w\a\f\f\o\c\u\3\e\m\a\g\q\p\q\m\i\z\u\t\a\0\q\j\q\j\j\t\s\8\2\k\2\m\y\q\3\j\4\z\9\l\p\v\a\a\b\5\b\i\s\2\6\9\8\i\u\s\p\y\8\y\7\4\w\g\1\a\u\4\p\d\7\w\c\k\f\n\2\1\4\k\t\u\m\p\2\y\p\9\w\a\8\4\0\a\5\1\l\v\h\3\t\l\s\8\q\m\3\3\q\n\g\0\g\0\q\g\c\f\r\m\3\7\n\6\c\8\g\3\s\g\0\7\r\w\8\1\y\3\h\q\0\x\n\3\o\u\5\g\9\w\p\8\8\3\i\z\8\j\5\t\j\v\m\d\8\d\f\c\h\u\n\h\i\n\2\6\m\7\2\y\l\8\d\m\z\s\e\y\a\8\o\a\m\i\8\q\i\m\3\1\u\x\b\q\y\1\n\w\s\8\a\v\3\7\v\o\m\p\p\q\1\t\l\8\j\b\d\x\6\n\0\u\r\q\b\9\8\q\b\i\3\n\5\v\k\9\h\f\s\4\3\q\l\9\3\w\g\w\f\h\z\1\v\y\1\j\f\e\r\k\3\2\n\p\t\2\9\4\v\g\9\h\0\n\c\0\9\g\o\x\w\b\c\j\z\p\c\7\5\8\b\e\x\c\m\7\r\m\n\8\0\g\w\e\2\3\c\s\8\q\z\t\o\3\a\q\i\k\d\z\k\1\c\k\6\w\s\p\a\3\k\t\5\y\6\o\t\a\b\7\m\d\c\w\2\q\e\r\e\r\y\8\p\0\d\i\7\2\q\y\l\0\v\w\6\d\2\2\z\7\t\1\s\u\c\y\l\2\d\h\g\v\e\u\7\n\o\6\q\i\s\m\0\1\n\v\a\c\i\6\b\5\q\4\1\6\t\v\c\f\s\u\5\c\1\q\1\y\t\y\m\k\8\w\8\i\4\1\7\e\1\m\6\8\1\w\v\w\q\1\n\g\n\2\8\0\4\7\9\v\6\r\x\i\r\s\l\6\6\e\4\n\d\r\y\4\y\k\y\5\d\c\i\0\j\u\3\y\q\w\y\z\e\f\r\d ]] 00:08:22.761 01:28:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:22.761 01:28:31 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:22.761 [2024-11-17 01:28:31.173326] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:22.761 [2024-11-17 01:28:31.173473] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61932 ] 00:08:23.021 [2024-11-17 01:28:31.343353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.021 [2024-11-17 01:28:31.435092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.280 [2024-11-17 01:28:31.592124] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:23.280  [2024-11-17T01:28:32.674Z] Copying: 512/512 [B] (average 500 kBps) 00:08:24.215 00:08:24.215 01:28:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kx1bq42ixnlqzcwaffocu3emagqpqmizuta0qjqjjts82k2myq3j4z9lpvaab5bis2698iuspy8y74wg1au4pd7wckfn214ktump2yp9wa840a51lvh3tls8qm33qng0g0qgcfrm37n6c8g3sg07rw81y3hq0xn3ou5g9wp883iz8j5tjvmd8dfchunhin26m72yl8dmzseya8oami8qim31uxbqy1nws8av37vomppq1tl8jbdx6n0urqb98qbi3n5vk9hfs43ql93wgwfhz1vy1jferk32npt294vg9h0nc09goxwbcjzpc758bexcm7rmn80gwe23cs8qzto3aqikdzk1ck6wspa3kt5y6otab7mdcw2qerery8p0di72qyl0vw6d22z7t1sucyl2dhgveu7no6qism01nvaci6b5q416tvcfsu5c1q1ytymk8w8i417e1m681wvwq1ngn280479v6rxirsl66e4ndry4yky5dci0ju3yqwyzefrd == \k\x\1\b\q\4\2\i\x\n\l\q\z\c\w\a\f\f\o\c\u\3\e\m\a\g\q\p\q\m\i\z\u\t\a\0\q\j\q\j\j\t\s\8\2\k\2\m\y\q\3\j\4\z\9\l\p\v\a\a\b\5\b\i\s\2\6\9\8\i\u\s\p\y\8\y\7\4\w\g\1\a\u\4\p\d\7\w\c\k\f\n\2\1\4\k\t\u\m\p\2\y\p\9\w\a\8\4\0\a\5\1\l\v\h\3\t\l\s\8\q\m\3\3\q\n\g\0\g\0\q\g\c\f\r\m\3\7\n\6\c\8\g\3\s\g\0\7\r\w\8\1\y\3\h\q\0\x\n\3\o\u\5\g\9\w\p\8\8\3\i\z\8\j\5\t\j\v\m\d\8\d\f\c\h\u\n\h\i\n\2\6\m\7\2\y\l\8\d\m\z\s\e\y\a\8\o\a\m\i\8\q\i\m\3\1\u\x\b\q\y\1\n\w\s\8\a\v\3\7\v\o\m\p\p\q\1\t\l\8\j\b\d\x\6\n\0\u\r\q\b\9\8\q\b\i\3\n\5\v\k\9\h\f\s\4\3\q\l\9\3\w\g\w\f\h\z\1\v\y\1\j\f\e\r\k\3\2\n\p\t\2\9\4\v\g\9\h\0\n\c\0\9\g\o\x\w\b\c\j\z\p\c\7\5\8\b\e\x\c\m\7\r\m\n\8\0\g\w\e\2\3\c\s\8\q\z\t\o\3\a\q\i\k\d\z\k\1\c\k\6\w\s\p\a\3\k\t\5\y\6\o\t\a\b\7\m\d\c\w\2\q\e\r\e\r\y\8\p\0\d\i\7\2\q\y\l\0\v\w\6\d\2\2\z\7\t\1\s\u\c\y\l\2\d\h\g\v\e\u\7\n\o\6\q\i\s\m\0\1\n\v\a\c\i\6\b\5\q\4\1\6\t\v\c\f\s\u\5\c\1\q\1\y\t\y\m\k\8\w\8\i\4\1\7\e\1\m\6\8\1\w\v\w\q\1\n\g\n\2\8\0\4\7\9\v\6\r\x\i\r\s\l\6\6\e\4\n\d\r\y\4\y\k\y\5\d\c\i\0\j\u\3\y\q\w\y\z\e\f\r\d ]] 00:08:24.215 01:28:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:24.215 01:28:32 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:24.474 [2024-11-17 01:28:32.689950] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:24.474 [2024-11-17 01:28:32.690240] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61953 ] 00:08:24.474 [2024-11-17 01:28:32.882204] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.732 [2024-11-17 01:28:32.976740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.732 [2024-11-17 01:28:33.128654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:24.990  [2024-11-17T01:28:34.402Z] Copying: 512/512 [B] (average 250 kBps) 00:08:25.943 00:08:25.943 01:28:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kx1bq42ixnlqzcwaffocu3emagqpqmizuta0qjqjjts82k2myq3j4z9lpvaab5bis2698iuspy8y74wg1au4pd7wckfn214ktump2yp9wa840a51lvh3tls8qm33qng0g0qgcfrm37n6c8g3sg07rw81y3hq0xn3ou5g9wp883iz8j5tjvmd8dfchunhin26m72yl8dmzseya8oami8qim31uxbqy1nws8av37vomppq1tl8jbdx6n0urqb98qbi3n5vk9hfs43ql93wgwfhz1vy1jferk32npt294vg9h0nc09goxwbcjzpc758bexcm7rmn80gwe23cs8qzto3aqikdzk1ck6wspa3kt5y6otab7mdcw2qerery8p0di72qyl0vw6d22z7t1sucyl2dhgveu7no6qism01nvaci6b5q416tvcfsu5c1q1ytymk8w8i417e1m681wvwq1ngn280479v6rxirsl66e4ndry4yky5dci0ju3yqwyzefrd == \k\x\1\b\q\4\2\i\x\n\l\q\z\c\w\a\f\f\o\c\u\3\e\m\a\g\q\p\q\m\i\z\u\t\a\0\q\j\q\j\j\t\s\8\2\k\2\m\y\q\3\j\4\z\9\l\p\v\a\a\b\5\b\i\s\2\6\9\8\i\u\s\p\y\8\y\7\4\w\g\1\a\u\4\p\d\7\w\c\k\f\n\2\1\4\k\t\u\m\p\2\y\p\9\w\a\8\4\0\a\5\1\l\v\h\3\t\l\s\8\q\m\3\3\q\n\g\0\g\0\q\g\c\f\r\m\3\7\n\6\c\8\g\3\s\g\0\7\r\w\8\1\y\3\h\q\0\x\n\3\o\u\5\g\9\w\p\8\8\3\i\z\8\j\5\t\j\v\m\d\8\d\f\c\h\u\n\h\i\n\2\6\m\7\2\y\l\8\d\m\z\s\e\y\a\8\o\a\m\i\8\q\i\m\3\1\u\x\b\q\y\1\n\w\s\8\a\v\3\7\v\o\m\p\p\q\1\t\l\8\j\b\d\x\6\n\0\u\r\q\b\9\8\q\b\i\3\n\5\v\k\9\h\f\s\4\3\q\l\9\3\w\g\w\f\h\z\1\v\y\1\j\f\e\r\k\3\2\n\p\t\2\9\4\v\g\9\h\0\n\c\0\9\g\o\x\w\b\c\j\z\p\c\7\5\8\b\e\x\c\m\7\r\m\n\8\0\g\w\e\2\3\c\s\8\q\z\t\o\3\a\q\i\k\d\z\k\1\c\k\6\w\s\p\a\3\k\t\5\y\6\o\t\a\b\7\m\d\c\w\2\q\e\r\e\r\y\8\p\0\d\i\7\2\q\y\l\0\v\w\6\d\2\2\z\7\t\1\s\u\c\y\l\2\d\h\g\v\e\u\7\n\o\6\q\i\s\m\0\1\n\v\a\c\i\6\b\5\q\4\1\6\t\v\c\f\s\u\5\c\1\q\1\y\t\y\m\k\8\w\8\i\4\1\7\e\1\m\6\8\1\w\v\w\q\1\n\g\n\2\8\0\4\7\9\v\6\r\x\i\r\s\l\6\6\e\4\n\d\r\y\4\y\k\y\5\d\c\i\0\j\u\3\y\q\w\y\z\e\f\r\d ]] 00:08:25.943 01:28:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:25.943 01:28:34 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:25.943 [2024-11-17 01:28:34.217198] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:25.943 [2024-11-17 01:28:34.217410] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61975 ] 00:08:26.214 [2024-11-17 01:28:34.399129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.214 [2024-11-17 01:28:34.494851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.214 [2024-11-17 01:28:34.645703] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:26.472  [2024-11-17T01:28:35.868Z] Copying: 512/512 [B] (average 250 kBps) 00:08:27.409 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- dd/posix.sh@93 -- # [[ kx1bq42ixnlqzcwaffocu3emagqpqmizuta0qjqjjts82k2myq3j4z9lpvaab5bis2698iuspy8y74wg1au4pd7wckfn214ktump2yp9wa840a51lvh3tls8qm33qng0g0qgcfrm37n6c8g3sg07rw81y3hq0xn3ou5g9wp883iz8j5tjvmd8dfchunhin26m72yl8dmzseya8oami8qim31uxbqy1nws8av37vomppq1tl8jbdx6n0urqb98qbi3n5vk9hfs43ql93wgwfhz1vy1jferk32npt294vg9h0nc09goxwbcjzpc758bexcm7rmn80gwe23cs8qzto3aqikdzk1ck6wspa3kt5y6otab7mdcw2qerery8p0di72qyl0vw6d22z7t1sucyl2dhgveu7no6qism01nvaci6b5q416tvcfsu5c1q1ytymk8w8i417e1m681wvwq1ngn280479v6rxirsl66e4ndry4yky5dci0ju3yqwyzefrd == \k\x\1\b\q\4\2\i\x\n\l\q\z\c\w\a\f\f\o\c\u\3\e\m\a\g\q\p\q\m\i\z\u\t\a\0\q\j\q\j\j\t\s\8\2\k\2\m\y\q\3\j\4\z\9\l\p\v\a\a\b\5\b\i\s\2\6\9\8\i\u\s\p\y\8\y\7\4\w\g\1\a\u\4\p\d\7\w\c\k\f\n\2\1\4\k\t\u\m\p\2\y\p\9\w\a\8\4\0\a\5\1\l\v\h\3\t\l\s\8\q\m\3\3\q\n\g\0\g\0\q\g\c\f\r\m\3\7\n\6\c\8\g\3\s\g\0\7\r\w\8\1\y\3\h\q\0\x\n\3\o\u\5\g\9\w\p\8\8\3\i\z\8\j\5\t\j\v\m\d\8\d\f\c\h\u\n\h\i\n\2\6\m\7\2\y\l\8\d\m\z\s\e\y\a\8\o\a\m\i\8\q\i\m\3\1\u\x\b\q\y\1\n\w\s\8\a\v\3\7\v\o\m\p\p\q\1\t\l\8\j\b\d\x\6\n\0\u\r\q\b\9\8\q\b\i\3\n\5\v\k\9\h\f\s\4\3\q\l\9\3\w\g\w\f\h\z\1\v\y\1\j\f\e\r\k\3\2\n\p\t\2\9\4\v\g\9\h\0\n\c\0\9\g\o\x\w\b\c\j\z\p\c\7\5\8\b\e\x\c\m\7\r\m\n\8\0\g\w\e\2\3\c\s\8\q\z\t\o\3\a\q\i\k\d\z\k\1\c\k\6\w\s\p\a\3\k\t\5\y\6\o\t\a\b\7\m\d\c\w\2\q\e\r\e\r\y\8\p\0\d\i\7\2\q\y\l\0\v\w\6\d\2\2\z\7\t\1\s\u\c\y\l\2\d\h\g\v\e\u\7\n\o\6\q\i\s\m\0\1\n\v\a\c\i\6\b\5\q\4\1\6\t\v\c\f\s\u\5\c\1\q\1\y\t\y\m\k\8\w\8\i\4\1\7\e\1\m\6\8\1\w\v\w\q\1\n\g\n\2\8\0\4\7\9\v\6\r\x\i\r\s\l\6\6\e\4\n\d\r\y\4\y\k\y\5\d\c\i\0\j\u\3\y\q\w\y\z\e\f\r\d ]] 00:08:27.409 00:08:27.409 real 0m11.880s 00:08:27.409 user 0m9.562s 00:08:27.409 sys 0m6.498s 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flags_misc -- common/autotest_common.sh@10 -- # set +x 00:08:27.409 ************************************ 00:08:27.409 END TEST dd_flags_misc 00:08:27.409 ************************************ 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@131 -- # tests_forced_aio 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@110 -- # printf '* Second test run%s\n' ', disabling liburing, forcing AIO' 00:08:27.409 * Second test run, disabling liburing, forcing AIO 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@113 -- # DD_APP+=("--aio") 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix -- dd/posix.sh@114 -- # run_test dd_flag_append_forced_aio append 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix -- 
common/autotest_common.sh@10 -- # set +x 00:08:27.409 ************************************ 00:08:27.409 START TEST dd_flag_append_forced_aio 00:08:27.409 ************************************ 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1129 -- # append 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@16 -- # local dump0 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@17 -- # local dump1 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # gen_bytes 32 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@19 -- # dump0=aqnd6xru7lk8u61sqor59497hf27slru 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # gen_bytes 32 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@20 -- # dump1=3p9tiqodsh4m7mz29gi4qh5eykaxo1cw 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@22 -- # printf %s aqnd6xru7lk8u61sqor59497hf27slru 00:08:27.409 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@23 -- # printf %s 3p9tiqodsh4m7mz29gi4qh5eykaxo1cw 00:08:27.410 01:28:35 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=append 00:08:27.410 [2024-11-17 01:28:35.724539] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
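The dump0/dump1 variables above hold two independent 32-byte strings, and the check that follows asserts that after a copy with --aio --oflag=append the destination contains its old contents followed by dump0, i.e. concatenation rather than overwrite. With coreutils dd the same check needs conv=notrunc so the destination is not truncated before appending; a minimal sketch with placeholder contents:

  printf 'AAAA' > dump0
  printf 'BBBB' > dump1
  dd if=dump0 of=dump1 oflag=append conv=notrunc status=none
  [[ $(cat dump1) == BBBBAAAA ]] && echo 'append kept the existing bytes'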
00:08:27.410 [2024-11-17 01:28:35.724682] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62015 ] 00:08:27.669 [2024-11-17 01:28:35.889205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.669 [2024-11-17 01:28:35.981098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.928 [2024-11-17 01:28:36.132528] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:27.928  [2024-11-17T01:28:37.325Z] Copying: 32/32 [B] (average 31 kBps) 00:08:28.866 00:08:28.866 ************************************ 00:08:28.866 END TEST dd_flag_append_forced_aio 00:08:28.866 ************************************ 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- dd/posix.sh@27 -- # [[ 3p9tiqodsh4m7mz29gi4qh5eykaxo1cwaqnd6xru7lk8u61sqor59497hf27slru == \3\p\9\t\i\q\o\d\s\h\4\m\7\m\z\2\9\g\i\4\q\h\5\e\y\k\a\x\o\1\c\w\a\q\n\d\6\x\r\u\7\l\k\8\u\6\1\s\q\o\r\5\9\4\9\7\h\f\2\7\s\l\r\u ]] 00:08:28.866 00:08:28.866 real 0m1.441s 00:08:28.866 user 0m1.165s 00:08:28.866 sys 0m0.156s 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_append_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix -- dd/posix.sh@115 -- # run_test dd_flag_directory_forced_aio directory 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:28.866 ************************************ 00:08:28.866 START TEST dd_flag_directory_forced_aio 00:08:28.866 ************************************ 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1129 -- # directory 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@31 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.866 01:28:37 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:28.866 01:28:37 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=directory --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:28.866 [2024-11-17 01:28:37.242942] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:28.866 [2024-11-17 01:28:37.243132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62054 ] 00:08:29.125 [2024-11-17 01:28:37.422691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.125 [2024-11-17 01:28:37.514180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.385 [2024-11-17 01:28:37.673498] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:29.385 [2024-11-17 01:28:37.756660] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:29.385 [2024-11-17 01:28:37.756741] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:29.385 [2024-11-17 01:28:37.756764] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:29.953 [2024-11-17 01:28:38.381340] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- dd/posix.sh@32 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 
--of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:30.213 01:28:38 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=directory 00:08:30.472 [2024-11-17 01:28:38.726637] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:30.473 [2024-11-17 01:28:38.726798] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62070 ] 00:08:30.473 [2024-11-17 01:28:38.897112] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:30.731 [2024-11-17 01:28:38.990148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.731 [2024-11-17 01:28:39.144031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:30.989 [2024-11-17 01:28:39.232841] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:30.989 [2024-11-17 01:28:39.232946] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0: Not a directory 00:08:30.989 [2024-11-17 01:28:39.232970] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:31.556 [2024-11-17 01:28:39.839064] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@655 -- # es=236 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@664 -- # es=108 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:31.816 01:28:40 
spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:31.816 00:08:31.816 real 0m2.940s 00:08:31.816 user 0m2.357s 00:08:31.816 sys 0m0.365s 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:31.816 ************************************ 00:08:31.816 END TEST dd_flag_directory_forced_aio 00:08:31.816 ************************************ 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_directory_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix -- dd/posix.sh@116 -- # run_test dd_flag_nofollow_forced_aio nofollow 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:31.816 ************************************ 00:08:31.816 START TEST dd_flag_nofollow_forced_aio 00:08:31.816 ************************************ 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1129 -- # nofollow 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@36 -- # local test_file0_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@37 -- # local test_file1_link=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@39 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@40 -- # ln -fs /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@42 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:31.816 01:28:40 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --iflag=nofollow --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:31.816 [2024-11-17 01:28:40.250060] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:31.816 [2024-11-17 01:28:40.250241] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62116 ] 00:08:32.075 [2024-11-17 01:28:40.430264] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:32.075 [2024-11-17 01:28:40.513567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.334 [2024-11-17 01:28:40.664258] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:32.334 [2024-11-17 01:28:40.743860] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:32.334 [2024-11-17 01:28:40.743946] spdk_dd.c:1083:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link: Too many levels of symbolic links 00:08:32.334 [2024-11-17 01:28:40.743969] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:32.901 [2024-11-17 01:28:41.323353] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@43 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@652 -- # local es=0 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@640 -- # local 
arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:08:33.161 01:28:41 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link --oflag=nofollow 00:08:33.420 [2024-11-17 01:28:41.668623] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:33.420 [2024-11-17 01:28:41.668818] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62132 ] 00:08:33.420 [2024-11-17 01:28:41.834907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.679 [2024-11-17 01:28:41.926827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:33.679 [2024-11-17 01:28:42.077469] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:33.940 [2024-11-17 01:28:42.158368] spdk_dd.c: 894:dd_open_file: *ERROR*: Could not open file /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:33.940 [2024-11-17 01:28:42.158447] spdk_dd.c:1132:dd_run: *ERROR*: /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link: Too many levels of symbolic links 00:08:33.940 [2024-11-17 01:28:42.158472] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:34.508 [2024-11-17 01:28:42.755546] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@655 -- # es=216 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@664 -- # es=88 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@665 -- # case "$es" in 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@672 -- # es=1 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@46 
-- # gen_bytes 512 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:34.767 01:28:42 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@48 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:34.767 [2024-11-17 01:28:43.076537] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:34.767 [2024-11-17 01:28:43.076718] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62157 ] 00:08:35.026 [2024-11-17 01:28:43.246238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.026 [2024-11-17 01:28:43.326783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.026 [2024-11-17 01:28:43.476955] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:35.285  [2024-11-17T01:28:44.681Z] Copying: 512/512 [B] (average 500 kBps) 00:08:36.222 00:08:36.222 ************************************ 00:08:36.222 END TEST dd_flag_nofollow_forced_aio 00:08:36.222 ************************************ 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- dd/posix.sh@49 -- # [[ sb4d5qx5ecn07hlytjduea4z1zejdaur7co22iqt9w5ds9x6y66xz2mefmrzp8l33cexrtsjaci86ngsfkdiqmcbkldf6acepgw7or3fx4b0inu085r93hsm805nfdjvjfzl8v4em4a6qeh0xzleildqf8fen8qobdglselnd5p02x33j8ihaf0rf350kgayxzcge6lfy4m4ph6dwmc37y5tefesml0km4we7odsko47erdz5q0j0sl957v7rv7ris113e4c2yi1rhwr1szgebj1hl1w8gbnlqasslsuc98iss3xsb2edrbeggwq9wnrt03dhifld0z7ox5hursq68lcfne2p8cyhoelk6s393lwgs9ihjpx3109dgsh5k47foztmyht36ofd5223wofrtprlew6m63zo3x32u8s8anc7hvg0hng7fhue9qzy3nrj3tpq0abucgz3d4252pmrd4g7d0ca36abym6tzvfit10d7use6ehyqi1pejsifcd == \s\b\4\d\5\q\x\5\e\c\n\0\7\h\l\y\t\j\d\u\e\a\4\z\1\z\e\j\d\a\u\r\7\c\o\2\2\i\q\t\9\w\5\d\s\9\x\6\y\6\6\x\z\2\m\e\f\m\r\z\p\8\l\3\3\c\e\x\r\t\s\j\a\c\i\8\6\n\g\s\f\k\d\i\q\m\c\b\k\l\d\f\6\a\c\e\p\g\w\7\o\r\3\f\x\4\b\0\i\n\u\0\8\5\r\9\3\h\s\m\8\0\5\n\f\d\j\v\j\f\z\l\8\v\4\e\m\4\a\6\q\e\h\0\x\z\l\e\i\l\d\q\f\8\f\e\n\8\q\o\b\d\g\l\s\e\l\n\d\5\p\0\2\x\3\3\j\8\i\h\a\f\0\r\f\3\5\0\k\g\a\y\x\z\c\g\e\6\l\f\y\4\m\4\p\h\6\d\w\m\c\3\7\y\5\t\e\f\e\s\m\l\0\k\m\4\w\e\7\o\d\s\k\o\4\7\e\r\d\z\5\q\0\j\0\s\l\9\5\7\v\7\r\v\7\r\i\s\1\1\3\e\4\c\2\y\i\1\r\h\w\r\1\s\z\g\e\b\j\1\h\l\1\w\8\g\b\n\l\q\a\s\s\l\s\u\c\9\8\i\s\s\3\x\s\b\2\e\d\r\b\e\g\g\w\q\9\w\n\r\t\0\3\d\h\i\f\l\d\0\z\7\o\x\5\h\u\r\s\q\6\8\l\c\f\n\e\2\p\8\c\y\h\o\e\l\k\6\s\3\9\3\l\w\g\s\9\i\h\j\p\x\3\1\0\9\d\g\s\h\5\k\4\7\f\o\z\t\m\y\h\t\3\6\o\f\d\5\2\2\3\w\o\f\r\t\p\r\l\e\w\6\m\6\3\z\o\3\x\3\2\u\8\s\8\a\n\c\7\h\v\g\0\h\n\g\7\f\h\u\e\9\q\z\y\3\n\r\j\3\t\p\q\0\a\b\u\c\g\z\3\d\4\2\5\2\p\m\r\d\4\g\7\d\0\c\a\3\6\a\b\y\m\6\t\z\v\f\i\t\1\0\d\7\u\s\e\6\e\h\y\q\i\1\p\e\j\s\i\f\c\d ]] 00:08:36.222 00:08:36.222 real 0m4.245s 00:08:36.222 user 0m3.381s 00:08:36.222 sys 0m0.524s 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_nofollow_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix -- dd/posix.sh@117 
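The nofollow cases traced above boil down to: create a symlink to the dump file, then ask spdk_dd to open it with --iflag=nofollow (and, in the second pass, --oflag=nofollow on the output side); the open is expected to fail with ELOOP ("Too many levels of symbolic links"), which the NOT helper normalizes from exit status 216 down to 1. A minimal standalone sketch of the read-side case, assuming a locally built spdk_dd at ./build/bin/spdk_dd and illustrative scratch paths under /tmp rather than the harness's dd.dump files:

# scratch files stand in for the harness's dd.dump0 / dd.dump0.link (illustrative paths)
dump0=/tmp/dd.dump0
dd if=/dev/urandom of="$dump0" bs=512 count=1 2>/dev/null
ln -sf "$dump0" "$dump0.link"
# --iflag=nofollow must refuse to open the symlink, so the copy has to fail
./build/bin/spdk_dd --aio --if="$dump0.link" --iflag=nofollow --of=/tmp/dd.dump1 \
  && echo "unexpected: the symlink was followed" \
  || echo "nofollow refused the symlink (exit $?)"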
-- # run_test dd_flag_noatime_forced_aio noatime 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:36.222 ************************************ 00:08:36.222 START TEST dd_flag_noatime_forced_aio 00:08:36.222 ************************************ 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1129 -- # noatime 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@53 -- # local atime_if 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@54 -- # local atime_of 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@58 -- # gen_bytes 512 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@60 -- # atime_if=1731806923 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@61 -- # atime_of=1731806924 00:08:36.222 01:28:44 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@66 -- # sleep 1 00:08:37.158 01:28:45 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=noatime --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:37.158 [2024-11-17 01:28:45.561213] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:37.158 [2024-11-17 01:28:45.561400] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62204 ] 00:08:37.417 [2024-11-17 01:28:45.740171] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.417 [2024-11-17 01:28:45.824665] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.676 [2024-11-17 01:28:45.981164] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:37.676  [2024-11-17T01:28:47.072Z] Copying: 512/512 [B] (average 500 kBps) 00:08:38.613 00:08:38.613 01:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:38.613 01:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@69 -- # (( atime_if == 1731806923 )) 00:08:38.613 01:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.613 01:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@70 -- # (( atime_of == 1731806924 )) 00:08:38.613 01:28:46 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:08:38.613 [2024-11-17 01:28:47.024005] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:38.613 [2024-11-17 01:28:47.024187] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62222 ] 00:08:38.871 [2024-11-17 01:28:47.202870] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.871 [2024-11-17 01:28:47.290642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:39.136 [2024-11-17 01:28:47.437863] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:39.136  [2024-11-17T01:28:48.572Z] Copying: 512/512 [B] (average 500 kBps) 00:08:40.113 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # stat --printf=%X /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:08:40.113 ************************************ 00:08:40.113 END TEST dd_flag_noatime_forced_aio 00:08:40.113 ************************************ 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- dd/posix.sh@73 -- # (( atime_if < 1731806927 )) 00:08:40.113 00:08:40.113 real 0m4.007s 00:08:40.113 user 0m2.365s 00:08:40.113 sys 0m0.397s 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flag_noatime_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix -- dd/posix.sh@118 -- # run_test dd_flags_misc_forced_aio io 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix -- 
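The noatime pass above records the access timestamps of both dump files with stat --printf=%X, sleeps a second, copies with --iflag=noatime and checks the source atime has not moved, then copies again without the flag and expects the atime to advance. A rough standalone equivalent, with illustrative paths standing in for the harness's dd.dump files and ./build/bin/spdk_dd standing in for wherever spdk_dd was built; note that on a relatime-mounted filesystem the atime can stay put even without the flag, so the unchanged-atime check is the weaker half of the test:

f=/tmp/dd.dump0                                 # illustrative scratch file
atime_before=$(stat --printf=%X "$f")
./build/bin/spdk_dd --aio --if="$f" --iflag=noatime --of=/tmp/dd.dump1
atime_after=$(stat --printf=%X "$f")
(( atime_before == atime_after )) && echo "atime unchanged: noatime honored"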
common/autotest_common.sh@10 -- # set +x 00:08:40.113 ************************************ 00:08:40.113 START TEST dd_flags_misc_forced_aio 00:08:40.113 ************************************ 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1129 -- # io 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@77 -- # local flags_ro flags_rw flag_ro flag_rw 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@81 -- # flags_ro=(direct nonblock) 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@82 -- # flags_rw=("${flags_ro[@]}" sync dsync) 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:40.113 01:28:48 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:40.372 [2024-11-17 01:28:48.585740] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:40.372 [2024-11-17 01:28:48.586125] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62266 ] 00:08:40.372 [2024-11-17 01:28:48.750066] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.631 [2024-11-17 01:28:48.838693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.631 [2024-11-17 01:28:48.992575] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:40.631  [2024-11-17T01:28:50.028Z] Copying: 512/512 [B] (average 500 kBps) 00:08:41.569 00:08:41.569 01:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jysxfiun2u387ujbvbrrldcumczgcoutrxja7twtsd9l3heo77o2n2wnubxo5qwlx3meot7odt7vvj2ycf0vjvovvsjf4akq3zth2aj39vrat55bvgszdiu0tyc25kiz6c4t04i4mzqini8lcmpdog5spqbk8yjfxdnukiftje10dftvtvqhqbi4dze6mqmqbj808rgch1tokufrq556c2ou40ouymubsjo2kbmu62z0x27bnhhc87z2zw08nzbh34y6ig3yieisb4s9fod9hunf12u4g6cq6y38mjkjzhts7asjbr1v02jjzlwbkyhcaj0ldx0tf8wueua9gze835ke98iejwzqo6d2ys80we9mkjzm0twf22apg4rsamutdhgcatd1ch1xn72mmtc57t0ur9ue909wzitlpsm5k0szxbr0rf4za5l194oldtsvcrhlws0cz42tx3rzqwivxawc2x9imzbhta3nqu53g8w6q2tyzjdlb2atwnd9erpj == 
\j\y\s\x\f\i\u\n\2\u\3\8\7\u\j\b\v\b\r\r\l\d\c\u\m\c\z\g\c\o\u\t\r\x\j\a\7\t\w\t\s\d\9\l\3\h\e\o\7\7\o\2\n\2\w\n\u\b\x\o\5\q\w\l\x\3\m\e\o\t\7\o\d\t\7\v\v\j\2\y\c\f\0\v\j\v\o\v\v\s\j\f\4\a\k\q\3\z\t\h\2\a\j\3\9\v\r\a\t\5\5\b\v\g\s\z\d\i\u\0\t\y\c\2\5\k\i\z\6\c\4\t\0\4\i\4\m\z\q\i\n\i\8\l\c\m\p\d\o\g\5\s\p\q\b\k\8\y\j\f\x\d\n\u\k\i\f\t\j\e\1\0\d\f\t\v\t\v\q\h\q\b\i\4\d\z\e\6\m\q\m\q\b\j\8\0\8\r\g\c\h\1\t\o\k\u\f\r\q\5\5\6\c\2\o\u\4\0\o\u\y\m\u\b\s\j\o\2\k\b\m\u\6\2\z\0\x\2\7\b\n\h\h\c\8\7\z\2\z\w\0\8\n\z\b\h\3\4\y\6\i\g\3\y\i\e\i\s\b\4\s\9\f\o\d\9\h\u\n\f\1\2\u\4\g\6\c\q\6\y\3\8\m\j\k\j\z\h\t\s\7\a\s\j\b\r\1\v\0\2\j\j\z\l\w\b\k\y\h\c\a\j\0\l\d\x\0\t\f\8\w\u\e\u\a\9\g\z\e\8\3\5\k\e\9\8\i\e\j\w\z\q\o\6\d\2\y\s\8\0\w\e\9\m\k\j\z\m\0\t\w\f\2\2\a\p\g\4\r\s\a\m\u\t\d\h\g\c\a\t\d\1\c\h\1\x\n\7\2\m\m\t\c\5\7\t\0\u\r\9\u\e\9\0\9\w\z\i\t\l\p\s\m\5\k\0\s\z\x\b\r\0\r\f\4\z\a\5\l\1\9\4\o\l\d\t\s\v\c\r\h\l\w\s\0\c\z\4\2\t\x\3\r\z\q\w\i\v\x\a\w\c\2\x\9\i\m\z\b\h\t\a\3\n\q\u\5\3\g\8\w\6\q\2\t\y\z\j\d\l\b\2\a\t\w\n\d\9\e\r\p\j ]] 00:08:41.569 01:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:41.569 01:28:49 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:41.569 [2024-11-17 01:28:50.017385] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:41.569 [2024-11-17 01:28:50.017586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62280 ] 00:08:41.829 [2024-11-17 01:28:50.196645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.829 [2024-11-17 01:28:50.281087] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:42.088 [2024-11-17 01:28:50.439554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:42.088  [2024-11-17T01:28:51.484Z] Copying: 512/512 [B] (average 500 kBps) 00:08:43.025 00:08:43.025 01:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jysxfiun2u387ujbvbrrldcumczgcoutrxja7twtsd9l3heo77o2n2wnubxo5qwlx3meot7odt7vvj2ycf0vjvovvsjf4akq3zth2aj39vrat55bvgszdiu0tyc25kiz6c4t04i4mzqini8lcmpdog5spqbk8yjfxdnukiftje10dftvtvqhqbi4dze6mqmqbj808rgch1tokufrq556c2ou40ouymubsjo2kbmu62z0x27bnhhc87z2zw08nzbh34y6ig3yieisb4s9fod9hunf12u4g6cq6y38mjkjzhts7asjbr1v02jjzlwbkyhcaj0ldx0tf8wueua9gze835ke98iejwzqo6d2ys80we9mkjzm0twf22apg4rsamutdhgcatd1ch1xn72mmtc57t0ur9ue909wzitlpsm5k0szxbr0rf4za5l194oldtsvcrhlws0cz42tx3rzqwivxawc2x9imzbhta3nqu53g8w6q2tyzjdlb2atwnd9erpj == 
\j\y\s\x\f\i\u\n\2\u\3\8\7\u\j\b\v\b\r\r\l\d\c\u\m\c\z\g\c\o\u\t\r\x\j\a\7\t\w\t\s\d\9\l\3\h\e\o\7\7\o\2\n\2\w\n\u\b\x\o\5\q\w\l\x\3\m\e\o\t\7\o\d\t\7\v\v\j\2\y\c\f\0\v\j\v\o\v\v\s\j\f\4\a\k\q\3\z\t\h\2\a\j\3\9\v\r\a\t\5\5\b\v\g\s\z\d\i\u\0\t\y\c\2\5\k\i\z\6\c\4\t\0\4\i\4\m\z\q\i\n\i\8\l\c\m\p\d\o\g\5\s\p\q\b\k\8\y\j\f\x\d\n\u\k\i\f\t\j\e\1\0\d\f\t\v\t\v\q\h\q\b\i\4\d\z\e\6\m\q\m\q\b\j\8\0\8\r\g\c\h\1\t\o\k\u\f\r\q\5\5\6\c\2\o\u\4\0\o\u\y\m\u\b\s\j\o\2\k\b\m\u\6\2\z\0\x\2\7\b\n\h\h\c\8\7\z\2\z\w\0\8\n\z\b\h\3\4\y\6\i\g\3\y\i\e\i\s\b\4\s\9\f\o\d\9\h\u\n\f\1\2\u\4\g\6\c\q\6\y\3\8\m\j\k\j\z\h\t\s\7\a\s\j\b\r\1\v\0\2\j\j\z\l\w\b\k\y\h\c\a\j\0\l\d\x\0\t\f\8\w\u\e\u\a\9\g\z\e\8\3\5\k\e\9\8\i\e\j\w\z\q\o\6\d\2\y\s\8\0\w\e\9\m\k\j\z\m\0\t\w\f\2\2\a\p\g\4\r\s\a\m\u\t\d\h\g\c\a\t\d\1\c\h\1\x\n\7\2\m\m\t\c\5\7\t\0\u\r\9\u\e\9\0\9\w\z\i\t\l\p\s\m\5\k\0\s\z\x\b\r\0\r\f\4\z\a\5\l\1\9\4\o\l\d\t\s\v\c\r\h\l\w\s\0\c\z\4\2\t\x\3\r\z\q\w\i\v\x\a\w\c\2\x\9\i\m\z\b\h\t\a\3\n\q\u\5\3\g\8\w\6\q\2\t\y\z\j\d\l\b\2\a\t\w\n\d\9\e\r\p\j ]] 00:08:43.025 01:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:43.025 01:28:51 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:43.025 [2024-11-17 01:28:51.460484] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:43.025 [2024-11-17 01:28:51.460643] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62300 ] 00:08:43.285 [2024-11-17 01:28:51.638224] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:43.285 [2024-11-17 01:28:51.721809] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.544 [2024-11-17 01:28:51.876357] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:43.544  [2024-11-17T01:28:52.941Z] Copying: 512/512 [B] (average 250 kBps) 00:08:44.482 00:08:44.482 01:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jysxfiun2u387ujbvbrrldcumczgcoutrxja7twtsd9l3heo77o2n2wnubxo5qwlx3meot7odt7vvj2ycf0vjvovvsjf4akq3zth2aj39vrat55bvgszdiu0tyc25kiz6c4t04i4mzqini8lcmpdog5spqbk8yjfxdnukiftje10dftvtvqhqbi4dze6mqmqbj808rgch1tokufrq556c2ou40ouymubsjo2kbmu62z0x27bnhhc87z2zw08nzbh34y6ig3yieisb4s9fod9hunf12u4g6cq6y38mjkjzhts7asjbr1v02jjzlwbkyhcaj0ldx0tf8wueua9gze835ke98iejwzqo6d2ys80we9mkjzm0twf22apg4rsamutdhgcatd1ch1xn72mmtc57t0ur9ue909wzitlpsm5k0szxbr0rf4za5l194oldtsvcrhlws0cz42tx3rzqwivxawc2x9imzbhta3nqu53g8w6q2tyzjdlb2atwnd9erpj == 
\j\y\s\x\f\i\u\n\2\u\3\8\7\u\j\b\v\b\r\r\l\d\c\u\m\c\z\g\c\o\u\t\r\x\j\a\7\t\w\t\s\d\9\l\3\h\e\o\7\7\o\2\n\2\w\n\u\b\x\o\5\q\w\l\x\3\m\e\o\t\7\o\d\t\7\v\v\j\2\y\c\f\0\v\j\v\o\v\v\s\j\f\4\a\k\q\3\z\t\h\2\a\j\3\9\v\r\a\t\5\5\b\v\g\s\z\d\i\u\0\t\y\c\2\5\k\i\z\6\c\4\t\0\4\i\4\m\z\q\i\n\i\8\l\c\m\p\d\o\g\5\s\p\q\b\k\8\y\j\f\x\d\n\u\k\i\f\t\j\e\1\0\d\f\t\v\t\v\q\h\q\b\i\4\d\z\e\6\m\q\m\q\b\j\8\0\8\r\g\c\h\1\t\o\k\u\f\r\q\5\5\6\c\2\o\u\4\0\o\u\y\m\u\b\s\j\o\2\k\b\m\u\6\2\z\0\x\2\7\b\n\h\h\c\8\7\z\2\z\w\0\8\n\z\b\h\3\4\y\6\i\g\3\y\i\e\i\s\b\4\s\9\f\o\d\9\h\u\n\f\1\2\u\4\g\6\c\q\6\y\3\8\m\j\k\j\z\h\t\s\7\a\s\j\b\r\1\v\0\2\j\j\z\l\w\b\k\y\h\c\a\j\0\l\d\x\0\t\f\8\w\u\e\u\a\9\g\z\e\8\3\5\k\e\9\8\i\e\j\w\z\q\o\6\d\2\y\s\8\0\w\e\9\m\k\j\z\m\0\t\w\f\2\2\a\p\g\4\r\s\a\m\u\t\d\h\g\c\a\t\d\1\c\h\1\x\n\7\2\m\m\t\c\5\7\t\0\u\r\9\u\e\9\0\9\w\z\i\t\l\p\s\m\5\k\0\s\z\x\b\r\0\r\f\4\z\a\5\l\1\9\4\o\l\d\t\s\v\c\r\h\l\w\s\0\c\z\4\2\t\x\3\r\z\q\w\i\v\x\a\w\c\2\x\9\i\m\z\b\h\t\a\3\n\q\u\5\3\g\8\w\6\q\2\t\y\z\j\d\l\b\2\a\t\w\n\d\9\e\r\p\j ]] 00:08:44.482 01:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:44.482 01:28:52 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=direct --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:44.482 [2024-11-17 01:28:52.898421] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:44.483 [2024-11-17 01:28:52.898596] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62319 ] 00:08:44.742 [2024-11-17 01:28:53.074378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:44.742 [2024-11-17 01:28:53.156784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.001 [2024-11-17 01:28:53.317336] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:45.001  [2024-11-17T01:28:54.413Z] Copying: 512/512 [B] (average 500 kBps) 00:08:45.954 00:08:45.954 01:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ jysxfiun2u387ujbvbrrldcumczgcoutrxja7twtsd9l3heo77o2n2wnubxo5qwlx3meot7odt7vvj2ycf0vjvovvsjf4akq3zth2aj39vrat55bvgszdiu0tyc25kiz6c4t04i4mzqini8lcmpdog5spqbk8yjfxdnukiftje10dftvtvqhqbi4dze6mqmqbj808rgch1tokufrq556c2ou40ouymubsjo2kbmu62z0x27bnhhc87z2zw08nzbh34y6ig3yieisb4s9fod9hunf12u4g6cq6y38mjkjzhts7asjbr1v02jjzlwbkyhcaj0ldx0tf8wueua9gze835ke98iejwzqo6d2ys80we9mkjzm0twf22apg4rsamutdhgcatd1ch1xn72mmtc57t0ur9ue909wzitlpsm5k0szxbr0rf4za5l194oldtsvcrhlws0cz42tx3rzqwivxawc2x9imzbhta3nqu53g8w6q2tyzjdlb2atwnd9erpj == 
\j\y\s\x\f\i\u\n\2\u\3\8\7\u\j\b\v\b\r\r\l\d\c\u\m\c\z\g\c\o\u\t\r\x\j\a\7\t\w\t\s\d\9\l\3\h\e\o\7\7\o\2\n\2\w\n\u\b\x\o\5\q\w\l\x\3\m\e\o\t\7\o\d\t\7\v\v\j\2\y\c\f\0\v\j\v\o\v\v\s\j\f\4\a\k\q\3\z\t\h\2\a\j\3\9\v\r\a\t\5\5\b\v\g\s\z\d\i\u\0\t\y\c\2\5\k\i\z\6\c\4\t\0\4\i\4\m\z\q\i\n\i\8\l\c\m\p\d\o\g\5\s\p\q\b\k\8\y\j\f\x\d\n\u\k\i\f\t\j\e\1\0\d\f\t\v\t\v\q\h\q\b\i\4\d\z\e\6\m\q\m\q\b\j\8\0\8\r\g\c\h\1\t\o\k\u\f\r\q\5\5\6\c\2\o\u\4\0\o\u\y\m\u\b\s\j\o\2\k\b\m\u\6\2\z\0\x\2\7\b\n\h\h\c\8\7\z\2\z\w\0\8\n\z\b\h\3\4\y\6\i\g\3\y\i\e\i\s\b\4\s\9\f\o\d\9\h\u\n\f\1\2\u\4\g\6\c\q\6\y\3\8\m\j\k\j\z\h\t\s\7\a\s\j\b\r\1\v\0\2\j\j\z\l\w\b\k\y\h\c\a\j\0\l\d\x\0\t\f\8\w\u\e\u\a\9\g\z\e\8\3\5\k\e\9\8\i\e\j\w\z\q\o\6\d\2\y\s\8\0\w\e\9\m\k\j\z\m\0\t\w\f\2\2\a\p\g\4\r\s\a\m\u\t\d\h\g\c\a\t\d\1\c\h\1\x\n\7\2\m\m\t\c\5\7\t\0\u\r\9\u\e\9\0\9\w\z\i\t\l\p\s\m\5\k\0\s\z\x\b\r\0\r\f\4\z\a\5\l\1\9\4\o\l\d\t\s\v\c\r\h\l\w\s\0\c\z\4\2\t\x\3\r\z\q\w\i\v\x\a\w\c\2\x\9\i\m\z\b\h\t\a\3\n\q\u\5\3\g\8\w\6\q\2\t\y\z\j\d\l\b\2\a\t\w\n\d\9\e\r\p\j ]] 00:08:45.954 01:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@85 -- # for flag_ro in "${flags_ro[@]}" 00:08:45.954 01:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@86 -- # gen_bytes 512 00:08:45.954 01:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/common.sh@98 -- # xtrace_disable 00:08:45.954 01:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:45.954 01:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:45.954 01:28:54 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=direct 00:08:45.954 [2024-11-17 01:28:54.357674] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:45.954 [2024-11-17 01:28:54.358139] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62334 ] 00:08:46.214 [2024-11-17 01:28:54.538047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.214 [2024-11-17 01:28:54.620420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.472 [2024-11-17 01:28:54.780113] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:46.472  [2024-11-17T01:28:55.869Z] Copying: 512/512 [B] (average 500 kBps) 00:08:47.410 00:08:47.410 01:28:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ytelx57ebzp8ry2992e3x1kkf9978k5npv4a7fklaejmrbvr82s9xeo2msskx5a7fspomucg2c3hu73mrx5nkpnm3mydorcyj8h7e6ah1mkkba44vljbti4dns4lfz89x9h2fg34zocyg9h5n3h4uqxnqmidpofmuiu3glsa5zsijfhx00nmgqex95oyi61m1h8tcqy17v4ir2ytjkubllswccyp8db561tfymk5tm2lkg05oh6yy6jsx3xvlnzc0mj6xj7vjp1stxvd1a7e9bsavqvxxbjkggsdosxevy4k74yltkhtnwrqpnb5sdfse80y9722gfhfef6ccw10njj7xqidrsrdxkulyr00hxbu3fcr2kflxsejz68wlwnlnlw8xmurvqi2795rmwbs1l9kktvq5uj8sbpfmzoqekpo2ckh3cqd5k75l4m7b627xqhylqdbb4cil2rkve93ii5rm9o2v4ythh1vvkazf67u9u98mwi3rdlvinx1dgxw == \y\t\e\l\x\5\7\e\b\z\p\8\r\y\2\9\9\2\e\3\x\1\k\k\f\9\9\7\8\k\5\n\p\v\4\a\7\f\k\l\a\e\j\m\r\b\v\r\8\2\s\9\x\e\o\2\m\s\s\k\x\5\a\7\f\s\p\o\m\u\c\g\2\c\3\h\u\7\3\m\r\x\5\n\k\p\n\m\3\m\y\d\o\r\c\y\j\8\h\7\e\6\a\h\1\m\k\k\b\a\4\4\v\l\j\b\t\i\4\d\n\s\4\l\f\z\8\9\x\9\h\2\f\g\3\4\z\o\c\y\g\9\h\5\n\3\h\4\u\q\x\n\q\m\i\d\p\o\f\m\u\i\u\3\g\l\s\a\5\z\s\i\j\f\h\x\0\0\n\m\g\q\e\x\9\5\o\y\i\6\1\m\1\h\8\t\c\q\y\1\7\v\4\i\r\2\y\t\j\k\u\b\l\l\s\w\c\c\y\p\8\d\b\5\6\1\t\f\y\m\k\5\t\m\2\l\k\g\0\5\o\h\6\y\y\6\j\s\x\3\x\v\l\n\z\c\0\m\j\6\x\j\7\v\j\p\1\s\t\x\v\d\1\a\7\e\9\b\s\a\v\q\v\x\x\b\j\k\g\g\s\d\o\s\x\e\v\y\4\k\7\4\y\l\t\k\h\t\n\w\r\q\p\n\b\5\s\d\f\s\e\8\0\y\9\7\2\2\g\f\h\f\e\f\6\c\c\w\1\0\n\j\j\7\x\q\i\d\r\s\r\d\x\k\u\l\y\r\0\0\h\x\b\u\3\f\c\r\2\k\f\l\x\s\e\j\z\6\8\w\l\w\n\l\n\l\w\8\x\m\u\r\v\q\i\2\7\9\5\r\m\w\b\s\1\l\9\k\k\t\v\q\5\u\j\8\s\b\p\f\m\z\o\q\e\k\p\o\2\c\k\h\3\c\q\d\5\k\7\5\l\4\m\7\b\6\2\7\x\q\h\y\l\q\d\b\b\4\c\i\l\2\r\k\v\e\9\3\i\i\5\r\m\9\o\2\v\4\y\t\h\h\1\v\v\k\a\z\f\6\7\u\9\u\9\8\m\w\i\3\r\d\l\v\i\n\x\1\d\g\x\w ]] 00:08:47.410 01:28:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:47.410 01:28:55 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=nonblock 00:08:47.410 [2024-11-17 01:28:55.819455] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:47.410 [2024-11-17 01:28:55.819619] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62357 ] 00:08:47.670 [2024-11-17 01:28:55.983890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.670 [2024-11-17 01:28:56.066364] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.929 [2024-11-17 01:28:56.213918] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:47.929  [2024-11-17T01:28:57.326Z] Copying: 512/512 [B] (average 500 kBps) 00:08:48.867 00:08:48.867 01:28:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ytelx57ebzp8ry2992e3x1kkf9978k5npv4a7fklaejmrbvr82s9xeo2msskx5a7fspomucg2c3hu73mrx5nkpnm3mydorcyj8h7e6ah1mkkba44vljbti4dns4lfz89x9h2fg34zocyg9h5n3h4uqxnqmidpofmuiu3glsa5zsijfhx00nmgqex95oyi61m1h8tcqy17v4ir2ytjkubllswccyp8db561tfymk5tm2lkg05oh6yy6jsx3xvlnzc0mj6xj7vjp1stxvd1a7e9bsavqvxxbjkggsdosxevy4k74yltkhtnwrqpnb5sdfse80y9722gfhfef6ccw10njj7xqidrsrdxkulyr00hxbu3fcr2kflxsejz68wlwnlnlw8xmurvqi2795rmwbs1l9kktvq5uj8sbpfmzoqekpo2ckh3cqd5k75l4m7b627xqhylqdbb4cil2rkve93ii5rm9o2v4ythh1vvkazf67u9u98mwi3rdlvinx1dgxw == \y\t\e\l\x\5\7\e\b\z\p\8\r\y\2\9\9\2\e\3\x\1\k\k\f\9\9\7\8\k\5\n\p\v\4\a\7\f\k\l\a\e\j\m\r\b\v\r\8\2\s\9\x\e\o\2\m\s\s\k\x\5\a\7\f\s\p\o\m\u\c\g\2\c\3\h\u\7\3\m\r\x\5\n\k\p\n\m\3\m\y\d\o\r\c\y\j\8\h\7\e\6\a\h\1\m\k\k\b\a\4\4\v\l\j\b\t\i\4\d\n\s\4\l\f\z\8\9\x\9\h\2\f\g\3\4\z\o\c\y\g\9\h\5\n\3\h\4\u\q\x\n\q\m\i\d\p\o\f\m\u\i\u\3\g\l\s\a\5\z\s\i\j\f\h\x\0\0\n\m\g\q\e\x\9\5\o\y\i\6\1\m\1\h\8\t\c\q\y\1\7\v\4\i\r\2\y\t\j\k\u\b\l\l\s\w\c\c\y\p\8\d\b\5\6\1\t\f\y\m\k\5\t\m\2\l\k\g\0\5\o\h\6\y\y\6\j\s\x\3\x\v\l\n\z\c\0\m\j\6\x\j\7\v\j\p\1\s\t\x\v\d\1\a\7\e\9\b\s\a\v\q\v\x\x\b\j\k\g\g\s\d\o\s\x\e\v\y\4\k\7\4\y\l\t\k\h\t\n\w\r\q\p\n\b\5\s\d\f\s\e\8\0\y\9\7\2\2\g\f\h\f\e\f\6\c\c\w\1\0\n\j\j\7\x\q\i\d\r\s\r\d\x\k\u\l\y\r\0\0\h\x\b\u\3\f\c\r\2\k\f\l\x\s\e\j\z\6\8\w\l\w\n\l\n\l\w\8\x\m\u\r\v\q\i\2\7\9\5\r\m\w\b\s\1\l\9\k\k\t\v\q\5\u\j\8\s\b\p\f\m\z\o\q\e\k\p\o\2\c\k\h\3\c\q\d\5\k\7\5\l\4\m\7\b\6\2\7\x\q\h\y\l\q\d\b\b\4\c\i\l\2\r\k\v\e\9\3\i\i\5\r\m\9\o\2\v\4\y\t\h\h\1\v\v\k\a\z\f\6\7\u\9\u\9\8\m\w\i\3\r\d\l\v\i\n\x\1\d\g\x\w ]] 00:08:48.867 01:28:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:48.867 01:28:57 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=sync 00:08:48.867 [2024-11-17 01:28:57.251085] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:48.867 [2024-11-17 01:28:57.251283] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62372 ] 00:08:49.127 [2024-11-17 01:28:57.430265] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.127 [2024-11-17 01:28:57.523048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.385 [2024-11-17 01:28:57.683762] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:49.385  [2024-11-17T01:28:58.782Z] Copying: 512/512 [B] (average 250 kBps) 00:08:50.323 00:08:50.323 01:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ytelx57ebzp8ry2992e3x1kkf9978k5npv4a7fklaejmrbvr82s9xeo2msskx5a7fspomucg2c3hu73mrx5nkpnm3mydorcyj8h7e6ah1mkkba44vljbti4dns4lfz89x9h2fg34zocyg9h5n3h4uqxnqmidpofmuiu3glsa5zsijfhx00nmgqex95oyi61m1h8tcqy17v4ir2ytjkubllswccyp8db561tfymk5tm2lkg05oh6yy6jsx3xvlnzc0mj6xj7vjp1stxvd1a7e9bsavqvxxbjkggsdosxevy4k74yltkhtnwrqpnb5sdfse80y9722gfhfef6ccw10njj7xqidrsrdxkulyr00hxbu3fcr2kflxsejz68wlwnlnlw8xmurvqi2795rmwbs1l9kktvq5uj8sbpfmzoqekpo2ckh3cqd5k75l4m7b627xqhylqdbb4cil2rkve93ii5rm9o2v4ythh1vvkazf67u9u98mwi3rdlvinx1dgxw == \y\t\e\l\x\5\7\e\b\z\p\8\r\y\2\9\9\2\e\3\x\1\k\k\f\9\9\7\8\k\5\n\p\v\4\a\7\f\k\l\a\e\j\m\r\b\v\r\8\2\s\9\x\e\o\2\m\s\s\k\x\5\a\7\f\s\p\o\m\u\c\g\2\c\3\h\u\7\3\m\r\x\5\n\k\p\n\m\3\m\y\d\o\r\c\y\j\8\h\7\e\6\a\h\1\m\k\k\b\a\4\4\v\l\j\b\t\i\4\d\n\s\4\l\f\z\8\9\x\9\h\2\f\g\3\4\z\o\c\y\g\9\h\5\n\3\h\4\u\q\x\n\q\m\i\d\p\o\f\m\u\i\u\3\g\l\s\a\5\z\s\i\j\f\h\x\0\0\n\m\g\q\e\x\9\5\o\y\i\6\1\m\1\h\8\t\c\q\y\1\7\v\4\i\r\2\y\t\j\k\u\b\l\l\s\w\c\c\y\p\8\d\b\5\6\1\t\f\y\m\k\5\t\m\2\l\k\g\0\5\o\h\6\y\y\6\j\s\x\3\x\v\l\n\z\c\0\m\j\6\x\j\7\v\j\p\1\s\t\x\v\d\1\a\7\e\9\b\s\a\v\q\v\x\x\b\j\k\g\g\s\d\o\s\x\e\v\y\4\k\7\4\y\l\t\k\h\t\n\w\r\q\p\n\b\5\s\d\f\s\e\8\0\y\9\7\2\2\g\f\h\f\e\f\6\c\c\w\1\0\n\j\j\7\x\q\i\d\r\s\r\d\x\k\u\l\y\r\0\0\h\x\b\u\3\f\c\r\2\k\f\l\x\s\e\j\z\6\8\w\l\w\n\l\n\l\w\8\x\m\u\r\v\q\i\2\7\9\5\r\m\w\b\s\1\l\9\k\k\t\v\q\5\u\j\8\s\b\p\f\m\z\o\q\e\k\p\o\2\c\k\h\3\c\q\d\5\k\7\5\l\4\m\7\b\6\2\7\x\q\h\y\l\q\d\b\b\4\c\i\l\2\r\k\v\e\9\3\i\i\5\r\m\9\o\2\v\4\y\t\h\h\1\v\v\k\a\z\f\6\7\u\9\u\9\8\m\w\i\3\r\d\l\v\i\n\x\1\d\g\x\w ]] 00:08:50.323 01:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@87 -- # for flag_rw in "${flags_rw[@]}" 00:08:50.323 01:28:58 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@89 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --aio --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --iflag=nonblock --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=dsync 00:08:50.323 [2024-11-17 01:28:58.733300] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:08:50.323 [2024-11-17 01:28:58.733487] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62392 ] 00:08:50.582 [2024-11-17 01:28:58.911790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:50.582 [2024-11-17 01:28:59.003576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.842 [2024-11-17 01:28:59.153895] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:50.842  [2024-11-17T01:29:00.237Z] Copying: 512/512 [B] (average 250 kBps) 00:08:51.779 00:08:51.779 01:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- dd/posix.sh@93 -- # [[ ytelx57ebzp8ry2992e3x1kkf9978k5npv4a7fklaejmrbvr82s9xeo2msskx5a7fspomucg2c3hu73mrx5nkpnm3mydorcyj8h7e6ah1mkkba44vljbti4dns4lfz89x9h2fg34zocyg9h5n3h4uqxnqmidpofmuiu3glsa5zsijfhx00nmgqex95oyi61m1h8tcqy17v4ir2ytjkubllswccyp8db561tfymk5tm2lkg05oh6yy6jsx3xvlnzc0mj6xj7vjp1stxvd1a7e9bsavqvxxbjkggsdosxevy4k74yltkhtnwrqpnb5sdfse80y9722gfhfef6ccw10njj7xqidrsrdxkulyr00hxbu3fcr2kflxsejz68wlwnlnlw8xmurvqi2795rmwbs1l9kktvq5uj8sbpfmzoqekpo2ckh3cqd5k75l4m7b627xqhylqdbb4cil2rkve93ii5rm9o2v4ythh1vvkazf67u9u98mwi3rdlvinx1dgxw == \y\t\e\l\x\5\7\e\b\z\p\8\r\y\2\9\9\2\e\3\x\1\k\k\f\9\9\7\8\k\5\n\p\v\4\a\7\f\k\l\a\e\j\m\r\b\v\r\8\2\s\9\x\e\o\2\m\s\s\k\x\5\a\7\f\s\p\o\m\u\c\g\2\c\3\h\u\7\3\m\r\x\5\n\k\p\n\m\3\m\y\d\o\r\c\y\j\8\h\7\e\6\a\h\1\m\k\k\b\a\4\4\v\l\j\b\t\i\4\d\n\s\4\l\f\z\8\9\x\9\h\2\f\g\3\4\z\o\c\y\g\9\h\5\n\3\h\4\u\q\x\n\q\m\i\d\p\o\f\m\u\i\u\3\g\l\s\a\5\z\s\i\j\f\h\x\0\0\n\m\g\q\e\x\9\5\o\y\i\6\1\m\1\h\8\t\c\q\y\1\7\v\4\i\r\2\y\t\j\k\u\b\l\l\s\w\c\c\y\p\8\d\b\5\6\1\t\f\y\m\k\5\t\m\2\l\k\g\0\5\o\h\6\y\y\6\j\s\x\3\x\v\l\n\z\c\0\m\j\6\x\j\7\v\j\p\1\s\t\x\v\d\1\a\7\e\9\b\s\a\v\q\v\x\x\b\j\k\g\g\s\d\o\s\x\e\v\y\4\k\7\4\y\l\t\k\h\t\n\w\r\q\p\n\b\5\s\d\f\s\e\8\0\y\9\7\2\2\g\f\h\f\e\f\6\c\c\w\1\0\n\j\j\7\x\q\i\d\r\s\r\d\x\k\u\l\y\r\0\0\h\x\b\u\3\f\c\r\2\k\f\l\x\s\e\j\z\6\8\w\l\w\n\l\n\l\w\8\x\m\u\r\v\q\i\2\7\9\5\r\m\w\b\s\1\l\9\k\k\t\v\q\5\u\j\8\s\b\p\f\m\z\o\q\e\k\p\o\2\c\k\h\3\c\q\d\5\k\7\5\l\4\m\7\b\6\2\7\x\q\h\y\l\q\d\b\b\4\c\i\l\2\r\k\v\e\9\3\i\i\5\r\m\9\o\2\v\4\y\t\h\h\1\v\v\k\a\z\f\6\7\u\9\u\9\8\m\w\i\3\r\d\l\v\i\n\x\1\d\g\x\w ]] 00:08:51.779 00:08:51.779 real 0m11.594s 00:08:51.779 user 0m9.097s 00:08:51.779 sys 0m1.507s 00:08:51.779 01:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.779 01:29:00 spdk_dd.spdk_dd_posix.dd_flags_misc_forced_aio -- common/autotest_common.sh@10 -- # set +x 00:08:51.779 ************************************ 00:08:51.779 END TEST dd_flags_misc_forced_aio 00:08:51.779 ************************************ 00:08:51.779 01:29:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@1 -- # cleanup 00:08:51.779 01:29:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@11 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0.link 00:08:51.779 01:29:00 spdk_dd.spdk_dd_posix -- dd/posix.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1.link 00:08:51.779 ************************************ 00:08:51.779 END TEST spdk_dd_posix 00:08:51.779 ************************************ 00:08:51.779 00:08:51.779 real 0m49.662s 00:08:51.779 user 0m37.643s 00:08:51.779 sys 0m13.821s 00:08:51.779 01:29:00 spdk_dd.spdk_dd_posix -- 
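The misc-flags section above crosses the read-side flags (direct, nonblock) with the write-side flags (direct, nonblock, sync, dsync), copying the same 512-byte payload each time and asserting it comes out intact; the long [[ ... == ... ]] traces are those payload comparisons. Condensed into a sketch, again with illustrative /tmp paths and ./build/bin/spdk_dd in place of the harness's locations:

for iflag in direct nonblock; do
  for oflag in direct nonblock sync dsync; do
    ./build/bin/spdk_dd --aio --if=/tmp/dd.dump0 --iflag="$iflag" \
                        --of=/tmp/dd.dump1 --oflag="$oflag"
  done
done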
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.779 01:29:00 spdk_dd.spdk_dd_posix -- common/autotest_common.sh@10 -- # set +x 00:08:51.779 01:29:00 spdk_dd -- dd/dd.sh@22 -- # run_test spdk_dd_malloc /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:51.779 01:29:00 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:51.779 01:29:00 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.779 01:29:00 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:08:51.779 ************************************ 00:08:51.779 START TEST spdk_dd_malloc 00:08:51.779 ************************************ 00:08:51.779 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/malloc.sh 00:08:52.038 * Looking for test storage... 00:08:52.038 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lcov --version 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # IFS=.-: 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@336 -- # read -ra ver1 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # IFS=.-: 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@337 -- # read -ra ver2 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@338 -- # local 'op=<' 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@340 -- # ver1_l=2 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@341 -- # ver2_l=1 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@344 -- # case "$op" in 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@345 -- # : 1 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # decimal 1 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=1 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 1 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:52.038 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # decimal 2 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@353 -- # local d=2 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@355 -- # echo 2 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@368 -- # return 0 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:52.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.039 --rc genhtml_branch_coverage=1 00:08:52.039 --rc genhtml_function_coverage=1 00:08:52.039 --rc genhtml_legend=1 00:08:52.039 --rc geninfo_all_blocks=1 00:08:52.039 --rc geninfo_unexecuted_blocks=1 00:08:52.039 00:08:52.039 ' 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:52.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.039 --rc genhtml_branch_coverage=1 00:08:52.039 --rc genhtml_function_coverage=1 00:08:52.039 --rc genhtml_legend=1 00:08:52.039 --rc geninfo_all_blocks=1 00:08:52.039 --rc geninfo_unexecuted_blocks=1 00:08:52.039 00:08:52.039 ' 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:52.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.039 --rc genhtml_branch_coverage=1 00:08:52.039 --rc genhtml_function_coverage=1 00:08:52.039 --rc genhtml_legend=1 00:08:52.039 --rc geninfo_all_blocks=1 00:08:52.039 --rc geninfo_unexecuted_blocks=1 00:08:52.039 00:08:52.039 ' 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:52.039 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:52.039 --rc genhtml_branch_coverage=1 00:08:52.039 --rc genhtml_function_coverage=1 00:08:52.039 --rc genhtml_legend=1 00:08:52.039 --rc geninfo_all_blocks=1 00:08:52.039 --rc geninfo_unexecuted_blocks=1 00:08:52.039 00:08:52.039 ' 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:52.039 01:29:00 
spdk_dd.spdk_dd_malloc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@5 -- # export PATH 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- dd/malloc.sh@38 -- # run_test dd_malloc_copy malloc_copy 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:08:52.039 ************************************ 00:08:52.039 START TEST dd_malloc_copy 00:08:52.039 ************************************ 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1129 -- # malloc_copy 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@12 -- # local mbdev0=malloc0 mbdev0_b=1048576 mbdev0_bs=512 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@13 -- # local mbdev1=malloc1 mbdev1_b=1048576 mbdev1_bs=512 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 
00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@15 -- # local -A method_bdev_malloc_create_0 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='1048576' ['block_size']='512') 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@21 -- # local -A method_bdev_malloc_create_1 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --json /dev/fd/62 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@28 -- # gen_conf 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:52.039 01:29:00 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:52.039 { 00:08:52.039 "subsystems": [ 00:08:52.039 { 00:08:52.039 "subsystem": "bdev", 00:08:52.039 "config": [ 00:08:52.039 { 00:08:52.039 "params": { 00:08:52.039 "block_size": 512, 00:08:52.039 "num_blocks": 1048576, 00:08:52.039 "name": "malloc0" 00:08:52.039 }, 00:08:52.039 "method": "bdev_malloc_create" 00:08:52.039 }, 00:08:52.039 { 00:08:52.039 "params": { 00:08:52.039 "block_size": 512, 00:08:52.039 "num_blocks": 1048576, 00:08:52.039 "name": "malloc1" 00:08:52.039 }, 00:08:52.039 "method": "bdev_malloc_create" 00:08:52.039 }, 00:08:52.039 { 00:08:52.039 "method": "bdev_wait_for_examine" 00:08:52.039 } 00:08:52.039 ] 00:08:52.039 } 00:08:52.039 ] 00:08:52.039 } 00:08:52.299 [2024-11-17 01:29:00.512739] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:52.299 [2024-11-17 01:29:00.513143] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62485 ] 00:08:52.299 [2024-11-17 01:29:00.691080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:52.558 [2024-11-17 01:29:00.782326] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.558 [2024-11-17 01:29:00.931582] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:08:54.466  [2024-11-17T01:29:03.864Z] Copying: 193/512 [MB] (193 MBps) [2024-11-17T01:29:04.801Z] Copying: 380/512 [MB] (186 MBps) [2024-11-17T01:29:07.337Z] Copying: 512/512 [MB] (average 188 MBps) 00:08:58.878 00:08:59.136 01:29:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc1 --ob=malloc0 --json /dev/fd/62 00:08:59.136 01:29:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/malloc.sh@33 -- # gen_conf 00:08:59.136 01:29:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- dd/common.sh@31 -- # xtrace_disable 00:08:59.136 01:29:07 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:08:59.136 { 00:08:59.136 "subsystems": [ 00:08:59.136 { 00:08:59.136 "subsystem": "bdev", 00:08:59.136 "config": [ 00:08:59.136 { 00:08:59.136 "params": { 00:08:59.136 "block_size": 512, 00:08:59.136 "num_blocks": 1048576, 00:08:59.136 "name": "malloc0" 00:08:59.136 }, 00:08:59.136 "method": "bdev_malloc_create" 00:08:59.136 }, 00:08:59.136 { 00:08:59.136 "params": { 00:08:59.136 "block_size": 512, 00:08:59.136 "num_blocks": 1048576, 00:08:59.136 "name": "malloc1" 00:08:59.136 }, 00:08:59.136 "method": 
"bdev_malloc_create" 00:08:59.136 }, 00:08:59.136 { 00:08:59.136 "method": "bdev_wait_for_examine" 00:08:59.136 } 00:08:59.136 ] 00:08:59.136 } 00:08:59.136 ] 00:08:59.136 } 00:08:59.136 [2024-11-17 01:29:07.458564] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:08:59.136 [2024-11-17 01:29:07.458749] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62562 ] 00:08:59.395 [2024-11-17 01:29:07.634587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.395 [2024-11-17 01:29:07.724749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.654 [2024-11-17 01:29:07.889196] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:01.555  [2024-11-17T01:29:10.950Z] Copying: 180/512 [MB] (180 MBps) [2024-11-17T01:29:11.886Z] Copying: 361/512 [MB] (181 MBps) [2024-11-17T01:29:15.173Z] Copying: 512/512 [MB] (average 181 MBps) 00:09:06.714 00:09:06.714 ************************************ 00:09:06.714 END TEST dd_malloc_copy 00:09:06.714 ************************************ 00:09:06.714 00:09:06.714 real 0m14.395s 00:09:06.714 user 0m13.357s 00:09:06.714 sys 0m0.871s 00:09:06.714 01:29:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.714 01:29:14 spdk_dd.spdk_dd_malloc.dd_malloc_copy -- common/autotest_common.sh@10 -- # set +x 00:09:06.714 ************************************ 00:09:06.714 END TEST spdk_dd_malloc 00:09:06.714 ************************************ 00:09:06.714 00:09:06.714 real 0m14.658s 00:09:06.714 user 0m13.499s 00:09:06.714 sys 0m0.990s 00:09:06.714 01:29:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.714 01:29:14 spdk_dd.spdk_dd_malloc -- common/autotest_common.sh@10 -- # set +x 00:09:06.714 01:29:14 spdk_dd -- dd/dd.sh@23 -- # run_test spdk_dd_bdev_to_bdev /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:06.714 01:29:14 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:06.714 01:29:14 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.714 01:29:14 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:06.714 ************************************ 00:09:06.714 START TEST spdk_dd_bdev_to_bdev 00:09:06.714 ************************************ 00:09:06.714 01:29:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/bdev_to_bdev.sh 0000:00:10.0 0000:00:11.0 00:09:06.714 * Looking for test storage... 
00:09:06.714 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:06.714 01:29:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:06.714 01:29:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lcov --version 00:09:06.714 01:29:14 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # IFS=.-: 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@336 -- # read -ra ver1 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # IFS=.-: 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@337 -- # read -ra ver2 00:09:06.714 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@338 -- # local 'op=<' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@340 -- # ver1_l=2 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@341 -- # ver2_l=1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@344 -- # case "$op" in 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@345 -- # : 1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # decimal 1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@365 -- # ver1[v]=1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # decimal 2 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@353 -- # local d=2 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@355 -- # echo 2 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@366 -- # ver2[v]=2 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@368 -- # return 0 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:06.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.715 --rc genhtml_branch_coverage=1 00:09:06.715 --rc genhtml_function_coverage=1 00:09:06.715 --rc genhtml_legend=1 00:09:06.715 --rc geninfo_all_blocks=1 00:09:06.715 --rc geninfo_unexecuted_blocks=1 00:09:06.715 00:09:06.715 ' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:06.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.715 --rc genhtml_branch_coverage=1 00:09:06.715 --rc genhtml_function_coverage=1 00:09:06.715 --rc genhtml_legend=1 00:09:06.715 --rc geninfo_all_blocks=1 00:09:06.715 --rc geninfo_unexecuted_blocks=1 00:09:06.715 00:09:06.715 ' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:06.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.715 --rc genhtml_branch_coverage=1 00:09:06.715 --rc genhtml_function_coverage=1 00:09:06.715 --rc genhtml_legend=1 00:09:06.715 --rc geninfo_all_blocks=1 00:09:06.715 --rc geninfo_unexecuted_blocks=1 00:09:06.715 00:09:06.715 ' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:06.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:06.715 --rc genhtml_branch_coverage=1 00:09:06.715 --rc genhtml_function_coverage=1 00:09:06.715 --rc genhtml_legend=1 00:09:06.715 --rc geninfo_all_blocks=1 00:09:06.715 --rc geninfo_unexecuted_blocks=1 00:09:06.715 00:09:06.715 ' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@15 -- # shopt -s extglob 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:06.715 01:29:15 
spdk_dd.spdk_dd_bdev_to_bdev -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@5 -- # export PATH 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@10 -- # nvmes=("$@") 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@47 -- # trap cleanup EXIT 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@49 -- # bs=1048576 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@51 -- # (( 2 > 1 )) 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0=Nvme0 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # bdev0=Nvme0n1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@52 -- # nvme0_pci=0000:00:10.0 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # nvme1=Nvme1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # bdev1=Nvme1n1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@53 -- # 
nvme1_pci=0000:00:11.0 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # method_bdev_nvme_attach_controller_0=(['name']='Nvme0' ['traddr']='0000:00:10.0' ['trtype']='pcie') 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@55 -- # declare -A method_bdev_nvme_attach_controller_0 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # method_bdev_nvme_attach_controller_1=(['name']='Nvme1' ['traddr']='0000:00:11.0' ['trtype']='pcie') 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@60 -- # declare -A method_bdev_nvme_attach_controller_1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@89 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@90 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@92 -- # magic='This Is Our Magic, find it' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@93 -- # echo 'This Is Our Magic, find it' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@96 -- # run_test dd_inflate_file /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:06.715 ************************************ 00:09:06.715 START TEST dd_inflate_file 00:09:06.715 ************************************ 00:09:06.715 01:29:15 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --oflag=append --bs=1048576 --count=64 00:09:06.974 [2024-11-17 01:29:15.172987] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
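A note on the dd_inflate_file step traced above: spdk_dd is driven much like coreutils dd, with --if/--of for input and output, --bs and --count for block size and block count, and --oflag=append so the 64 MiB of zeros land after the short magic line already written to dd.dump0 (the later wc -c of 67108891 bytes is 64 MiB plus a 27-byte magic line). Outside the harness the same invocation would look roughly like this, with paths and sizes taken verbatim from the trace:

    # append 64 x 1 MiB of zeros to the dump file that already holds the magic line
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/dev/zero \
        --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 \
        --oflag=append --bs=1048576 --count=64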
00:09:06.974 [2024-11-17 01:29:15.173421] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62725 ] 00:09:06.974 [2024-11-17 01:29:15.355443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.233 [2024-11-17 01:29:15.451339] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.233 [2024-11-17 01:29:15.617273] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:07.491  [2024-11-17T01:29:16.886Z] Copying: 64/64 [MB] (average 1777 MBps) 00:09:08.427 00:09:08.428 ************************************ 00:09:08.428 END TEST dd_inflate_file 00:09:08.428 ************************************ 00:09:08.428 00:09:08.428 real 0m1.574s 00:09:08.428 user 0m1.277s 00:09:08.428 sys 0m0.922s 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_inflate_file -- common/autotest_common.sh@10 -- # set +x 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # wc -c 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@104 -- # test_file0_size=67108891 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # run_test dd_copy_to_out_bdev /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@107 -- # gen_conf 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:08.428 ************************************ 00:09:08.428 START TEST dd_copy_to_out_bdev 00:09:08.428 ************************************ 00:09:08.428 01:29:16 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 --json /dev/fd/62 00:09:08.428 { 00:09:08.428 "subsystems": [ 00:09:08.428 { 00:09:08.428 "subsystem": "bdev", 00:09:08.428 "config": [ 00:09:08.428 { 00:09:08.428 "params": { 00:09:08.428 "trtype": "pcie", 00:09:08.428 "traddr": "0000:00:10.0", 00:09:08.428 "name": "Nvme0" 00:09:08.428 }, 00:09:08.428 "method": "bdev_nvme_attach_controller" 00:09:08.428 }, 00:09:08.428 { 00:09:08.428 "params": { 00:09:08.428 "trtype": "pcie", 00:09:08.428 "traddr": "0000:00:11.0", 00:09:08.428 "name": "Nvme1" 00:09:08.428 }, 00:09:08.428 "method": "bdev_nvme_attach_controller" 00:09:08.428 }, 00:09:08.428 { 00:09:08.428 "method": "bdev_wait_for_examine" 00:09:08.428 } 00:09:08.428 ] 00:09:08.428 } 00:09:08.428 ] 00:09:08.428 } 00:09:08.428 [2024-11-17 01:29:16.782945] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
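The dd_copy_to_out_bdev run that has just started shows how spdk_dd gets its bdev layer: there is no config file, the harness's gen_conf prints the "subsystems" JSON above and hands it to spdk_dd on a file descriptor via --json /dev/fd/62. That config attaches the two NVMe controllers at 0000:00:10.0 and 0000:00:11.0 and ends with bdev_wait_for_examine, so Nvme0n1 and Nvme1n1 exist before the copy begins. A rough equivalent from a plain shell, using process substitution as a stand-in for the harness's fd plumbing (the JSON mirrors the block printed in the trace):

    # copy the 64 MiB dump file onto the Nvme0n1 bdev
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd \
        --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=Nvme0n1 \
        --json <(cat <<'EOF'
    {"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"trtype":"pcie","traddr":"0000:00:10.0","name":"Nvme0"},"method":"bdev_nvme_attach_controller"},
      {"params":{"trtype":"pcie","traddr":"0000:00:11.0","name":"Nvme1"},"method":"bdev_nvme_attach_controller"},
      {"method":"bdev_wait_for_examine"}]}]}
    EOF
    )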
00:09:08.428 [2024-11-17 01:29:16.783268] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62765 ] 00:09:08.686 [2024-11-17 01:29:16.948575] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:08.686 [2024-11-17 01:29:17.047736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:08.945 [2024-11-17 01:29:17.209205] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:10.405  [2024-11-17T01:29:18.864Z] Copying: 46/64 [MB] (46 MBps) [2024-11-17T01:29:19.800Z] Copying: 64/64 [MB] (average 45 MBps) 00:09:11.341 00:09:11.341 ************************************ 00:09:11.341 END TEST dd_copy_to_out_bdev 00:09:11.341 ************************************ 00:09:11.341 00:09:11.341 real 0m3.027s 00:09:11.341 user 0m2.757s 00:09:11.341 sys 0m2.267s 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_copy_to_out_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@113 -- # count=65 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@115 -- # run_test dd_offset_magic offset_magic 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:11.341 ************************************ 00:09:11.341 START TEST dd_offset_magic 00:09:11.341 ************************************ 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1129 -- # offset_magic 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@13 -- # local magic_check 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@14 -- # local offsets offset 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@16 -- # offsets=(16 64) 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=16 --bs=1048576 --json /dev/fd/62 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:11.341 01:29:19 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:11.601 { 00:09:11.601 "subsystems": [ 00:09:11.601 { 00:09:11.601 "subsystem": "bdev", 00:09:11.601 "config": [ 00:09:11.601 { 00:09:11.601 "params": { 00:09:11.601 "trtype": "pcie", 00:09:11.601 "traddr": "0000:00:10.0", 00:09:11.601 "name": "Nvme0" 00:09:11.601 }, 00:09:11.601 "method": "bdev_nvme_attach_controller" 00:09:11.601 }, 00:09:11.601 { 00:09:11.601 "params": { 00:09:11.601 "trtype": "pcie", 00:09:11.601 "traddr": "0000:00:11.0", 00:09:11.601 "name": "Nvme1" 
00:09:11.601 }, 00:09:11.601 "method": "bdev_nvme_attach_controller" 00:09:11.601 }, 00:09:11.601 { 00:09:11.601 "method": "bdev_wait_for_examine" 00:09:11.601 } 00:09:11.601 ] 00:09:11.601 } 00:09:11.601 ] 00:09:11.601 } 00:09:11.601 [2024-11-17 01:29:19.862773] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:11.601 [2024-11-17 01:29:19.862956] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62822 ] 00:09:11.601 [2024-11-17 01:29:20.038922] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:11.860 [2024-11-17 01:29:20.122128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:11.860 [2024-11-17 01:29:20.282114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:12.429  [2024-11-17T01:29:21.456Z] Copying: 65/65 [MB] (average 955 MBps) 00:09:12.997 00:09:12.997 01:29:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=16 --bs=1048576 --json /dev/fd/62 00:09:12.997 01:29:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:12.997 01:29:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:12.997 01:29:21 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:12.997 { 00:09:12.997 "subsystems": [ 00:09:12.997 { 00:09:12.997 "subsystem": "bdev", 00:09:12.997 "config": [ 00:09:12.997 { 00:09:12.997 "params": { 00:09:12.997 "trtype": "pcie", 00:09:12.997 "traddr": "0000:00:10.0", 00:09:12.997 "name": "Nvme0" 00:09:12.997 }, 00:09:12.997 "method": "bdev_nvme_attach_controller" 00:09:12.997 }, 00:09:12.997 { 00:09:12.997 "params": { 00:09:12.997 "trtype": "pcie", 00:09:12.997 "traddr": "0000:00:11.0", 00:09:12.997 "name": "Nvme1" 00:09:12.997 }, 00:09:12.997 "method": "bdev_nvme_attach_controller" 00:09:12.997 }, 00:09:12.997 { 00:09:12.997 "method": "bdev_wait_for_examine" 00:09:12.997 } 00:09:12.997 ] 00:09:12.997 } 00:09:12.997 ] 00:09:12.997 } 00:09:13.256 [2024-11-17 01:29:21.480756] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:13.256 [2024-11-17 01:29:21.480943] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62855 ] 00:09:13.256 [2024-11-17 01:29:21.657914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:13.515 [2024-11-17 01:29:21.741107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:13.515 [2024-11-17 01:29:21.897057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:13.774  [2024-11-17T01:29:23.170Z] Copying: 1024/1024 [kB] (average 500 MBps) 00:09:14.711 00:09:14.712 01:29:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:14.712 01:29:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:14.712 01:29:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@18 -- # for offset in "${offsets[@]}" 00:09:14.712 01:29:22 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme0n1 --ob=Nvme1n1 --count=65 --seek=64 --bs=1048576 --json /dev/fd/62 00:09:14.712 01:29:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@20 -- # gen_conf 00:09:14.712 01:29:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:14.712 01:29:23 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:14.712 { 00:09:14.712 "subsystems": [ 00:09:14.712 { 00:09:14.712 "subsystem": "bdev", 00:09:14.712 "config": [ 00:09:14.712 { 00:09:14.712 "params": { 00:09:14.712 "trtype": "pcie", 00:09:14.712 "traddr": "0000:00:10.0", 00:09:14.712 "name": "Nvme0" 00:09:14.712 }, 00:09:14.712 "method": "bdev_nvme_attach_controller" 00:09:14.712 }, 00:09:14.712 { 00:09:14.712 "params": { 00:09:14.712 "trtype": "pcie", 00:09:14.712 "traddr": "0000:00:11.0", 00:09:14.712 "name": "Nvme1" 00:09:14.712 }, 00:09:14.712 "method": "bdev_nvme_attach_controller" 00:09:14.712 }, 00:09:14.712 { 00:09:14.712 "method": "bdev_wait_for_examine" 00:09:14.712 } 00:09:14.712 ] 00:09:14.712 } 00:09:14.712 ] 00:09:14.712 } 00:09:14.712 [2024-11-17 01:29:23.109003] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:14.712 [2024-11-17 01:29:23.109351] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62884 ] 00:09:14.970 [2024-11-17 01:29:23.289035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:14.970 [2024-11-17 01:29:23.376631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:15.229 [2024-11-17 01:29:23.546759] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:15.488  [2024-11-17T01:29:24.885Z] Copying: 65/65 [MB] (average 1083 MBps) 00:09:16.426 00:09:16.426 01:29:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # gen_conf 00:09:16.426 01:29:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=Nvme1n1 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=1 --skip=64 --bs=1048576 --json /dev/fd/62 00:09:16.426 01:29:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/common.sh@31 -- # xtrace_disable 00:09:16.426 01:29:24 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:16.426 { 00:09:16.426 "subsystems": [ 00:09:16.426 { 00:09:16.426 "subsystem": "bdev", 00:09:16.426 "config": [ 00:09:16.426 { 00:09:16.426 "params": { 00:09:16.426 "trtype": "pcie", 00:09:16.426 "traddr": "0000:00:10.0", 00:09:16.426 "name": "Nvme0" 00:09:16.426 }, 00:09:16.426 "method": "bdev_nvme_attach_controller" 00:09:16.426 }, 00:09:16.426 { 00:09:16.426 "params": { 00:09:16.426 "trtype": "pcie", 00:09:16.426 "traddr": "0000:00:11.0", 00:09:16.426 "name": "Nvme1" 00:09:16.426 }, 00:09:16.426 "method": "bdev_nvme_attach_controller" 00:09:16.426 }, 00:09:16.426 { 00:09:16.426 "method": "bdev_wait_for_examine" 00:09:16.426 } 00:09:16.426 ] 00:09:16.426 } 00:09:16.426 ] 00:09:16.426 } 00:09:16.426 [2024-11-17 01:29:24.678448] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:16.426 [2024-11-17 01:29:24.678626] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62910 ] 00:09:16.426 [2024-11-17 01:29:24.855051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:16.685 [2024-11-17 01:29:24.944578] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.685 [2024-11-17 01:29:25.094058] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:16.944  [2024-11-17T01:29:26.339Z] Copying: 1024/1024 [kB] (average 1000 MBps) 00:09:17.880 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@35 -- # read -rn26 magic_check 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- dd/bdev_to_bdev.sh@36 -- # [[ This Is Our Magic, find it == \T\h\i\s\ \I\s\ \O\u\r\ \M\a\g\i\c\,\ \f\i\n\d\ \i\t ]] 00:09:17.880 00:09:17.880 real 0m6.407s 00:09:17.880 user 0m5.370s 00:09:17.880 sys 0m2.126s 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev.dd_offset_magic -- common/autotest_common.sh@10 -- # set +x 00:09:17.880 ************************************ 00:09:17.880 END TEST dd_offset_magic 00:09:17.880 ************************************ 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@1 -- # cleanup 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@42 -- # clear_nvme Nvme0n1 '' 4194330 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme0n1 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json /dev/fd/62 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:17.880 01:29:26 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:17.880 { 00:09:17.880 "subsystems": [ 00:09:17.880 { 00:09:17.880 "subsystem": "bdev", 00:09:17.880 "config": [ 00:09:17.880 { 00:09:17.880 "params": { 00:09:17.880 "trtype": "pcie", 00:09:17.880 "traddr": "0000:00:10.0", 00:09:17.880 "name": "Nvme0" 00:09:17.880 }, 00:09:17.880 "method": "bdev_nvme_attach_controller" 00:09:17.880 }, 00:09:17.880 { 00:09:17.880 "params": { 00:09:17.880 "trtype": "pcie", 00:09:17.880 "traddr": "0000:00:11.0", 00:09:17.880 "name": "Nvme1" 00:09:17.880 }, 00:09:17.880 "method": "bdev_nvme_attach_controller" 00:09:17.880 }, 00:09:17.880 { 00:09:17.880 "method": "bdev_wait_for_examine" 00:09:17.880 } 00:09:17.880 ] 00:09:17.880 } 00:09:17.880 ] 00:09:17.880 } 00:09:17.880 [2024-11-17 01:29:26.329997] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:17.880 [2024-11-17 01:29:26.330179] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62956 ] 00:09:18.139 [2024-11-17 01:29:26.505550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:18.139 [2024-11-17 01:29:26.592981] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.398 [2024-11-17 01:29:26.771004] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:18.657  [2024-11-17T01:29:28.052Z] Copying: 5120/5120 [kB] (average 1250 MBps) 00:09:19.593 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@43 -- # clear_nvme Nvme1n1 '' 4194330 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@10 -- # local bdev=Nvme1n1 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@11 -- # local nvme_ref= 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@12 -- # local size=4194330 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@14 -- # local bs=1048576 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@15 -- # local count=5 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json /dev/fd/62 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@18 -- # gen_conf 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:19.593 01:29:27 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:19.593 { 00:09:19.593 "subsystems": [ 00:09:19.593 { 00:09:19.593 "subsystem": "bdev", 00:09:19.593 "config": [ 00:09:19.593 { 00:09:19.593 "params": { 00:09:19.593 "trtype": "pcie", 00:09:19.593 "traddr": "0000:00:10.0", 00:09:19.593 "name": "Nvme0" 00:09:19.593 }, 00:09:19.593 "method": "bdev_nvme_attach_controller" 00:09:19.593 }, 00:09:19.593 { 00:09:19.593 "params": { 00:09:19.593 "trtype": "pcie", 00:09:19.593 "traddr": "0000:00:11.0", 00:09:19.593 "name": "Nvme1" 00:09:19.593 }, 00:09:19.593 "method": "bdev_nvme_attach_controller" 00:09:19.593 }, 00:09:19.593 { 00:09:19.593 "method": "bdev_wait_for_examine" 00:09:19.593 } 00:09:19.593 ] 00:09:19.593 } 00:09:19.593 ] 00:09:19.593 } 00:09:19.593 [2024-11-17 01:29:27.813145] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
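The clear_nvme calls above are the suite's cleanup: clear_nvme is asked to wipe 4194330 bytes (the region the test wrote) from each bdev, which it rounds up to count=5 one-MiB blocks of zeros. In sketch form, with the same $conf and $SPDK_DD placeholders as before:

    # zero the first 5 MiB of each NVMe bdev; 5 x 1048576 covers the 4194330 bytes requested
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme0n1 --count=5 --json "$conf"
    "$SPDK_DD" --if=/dev/zero --bs=1048576 --ob=Nvme1n1 --count=5 --json "$conf"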
00:09:19.593 [2024-11-17 01:29:27.813328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62979 ] 00:09:19.593 [2024-11-17 01:29:27.978608] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.853 [2024-11-17 01:29:28.074285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.853 [2024-11-17 01:29:28.227987] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:20.112  [2024-11-17T01:29:29.507Z] Copying: 5120/5120 [kB] (average 833 MBps) 00:09:21.048 00:09:21.048 01:29:29 spdk_dd.spdk_dd_bdev_to_bdev -- dd/bdev_to_bdev.sh@44 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 '' 00:09:21.048 ************************************ 00:09:21.048 END TEST spdk_dd_bdev_to_bdev 00:09:21.048 ************************************ 00:09:21.048 00:09:21.048 real 0m14.464s 00:09:21.048 user 0m12.181s 00:09:21.048 sys 0m7.016s 00:09:21.048 01:29:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.048 01:29:29 spdk_dd.spdk_dd_bdev_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:21.048 01:29:29 spdk_dd -- dd/dd.sh@24 -- # (( SPDK_TEST_URING == 1 )) 00:09:21.048 01:29:29 spdk_dd -- dd/dd.sh@25 -- # run_test spdk_dd_uring /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:21.048 01:29:29 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.048 01:29:29 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.048 01:29:29 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:21.048 ************************************ 00:09:21.048 START TEST spdk_dd_uring 00:09:21.048 ************************************ 00:09:21.048 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/uring.sh 00:09:21.048 * Looking for test storage... 
00:09:21.048 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:21.048 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:21.048 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lcov --version 00:09:21.048 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:21.307 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:21.307 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.307 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.307 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.307 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.307 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@344 -- # case "$op" in 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@345 -- # : 1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # decimal 1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # decimal 2 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@353 -- # local d=2 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@355 -- # echo 2 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@368 -- # return 0 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:21.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.308 --rc genhtml_branch_coverage=1 00:09:21.308 --rc genhtml_function_coverage=1 00:09:21.308 --rc genhtml_legend=1 00:09:21.308 --rc geninfo_all_blocks=1 00:09:21.308 --rc geninfo_unexecuted_blocks=1 00:09:21.308 00:09:21.308 ' 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:21.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.308 --rc genhtml_branch_coverage=1 00:09:21.308 --rc genhtml_function_coverage=1 00:09:21.308 --rc genhtml_legend=1 00:09:21.308 --rc geninfo_all_blocks=1 00:09:21.308 --rc geninfo_unexecuted_blocks=1 00:09:21.308 00:09:21.308 ' 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:21.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.308 --rc genhtml_branch_coverage=1 00:09:21.308 --rc genhtml_function_coverage=1 00:09:21.308 --rc genhtml_legend=1 00:09:21.308 --rc geninfo_all_blocks=1 00:09:21.308 --rc geninfo_unexecuted_blocks=1 00:09:21.308 00:09:21.308 ' 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:21.308 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.308 --rc genhtml_branch_coverage=1 00:09:21.308 --rc genhtml_function_coverage=1 00:09:21.308 --rc genhtml_legend=1 00:09:21.308 --rc geninfo_all_blocks=1 00:09:21.308 --rc geninfo_unexecuted_blocks=1 00:09:21.308 00:09:21.308 ' 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- paths/export.sh@5 -- # export PATH 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- dd/uring.sh@103 -- # run_test dd_uring_copy uring_zram_copy 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:21.308 ************************************ 00:09:21.308 START TEST dd_uring_copy 00:09:21.308 ************************************ 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1129 -- # uring_zram_copy 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@15 -- # local zram_dev_id 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@16 -- # local magic 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@17 -- # local magic_file0=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@18 -- # local magic_file1=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:21.308 
01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@19 -- # local verify_magic 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@21 -- # init_zram 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@159 -- # [[ -e /sys/class/zram-control ]] 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@160 -- # return 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # create_zram_dev 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@164 -- # cat /sys/class/zram-control/hot_add 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@22 -- # zram_dev_id=1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@23 -- # set_zram_dev 1 512M 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@177 -- # local id=1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@178 -- # local size=512M 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@180 -- # [[ -e /sys/block/zram1 ]] 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@182 -- # echo 512M 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@25 -- # local ubdev=uring0 ufile=/dev/zram1 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # method_bdev_uring_create_0=(['filename']='/dev/zram1' ['name']='uring0') 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@27 -- # local -A method_bdev_uring_create_0 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@33 -- # local mbdev=malloc0 mbdev_b=1048576 mbdev_bs=512 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='1048576' ['block_size']='512') 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@35 -- # local -A method_bdev_malloc_create_0 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # gen_bytes 1024 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@98 -- # xtrace_disable 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:21.308 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@41 -- # magic=05ixuo58yqfvjzyugs7qpvyjjy1x0x889tvece71y8fgkpsh5m2xqz4pcv55pif23ouijg0bxo33kx9lj1pwejskdoukezux8nm00n0gaedhu08fwnx1acrulvu4u911c21wrbz8ly4aurueck1h0lnt37yvvlk89cngysxr1c7qyvlp08p24czs16qi6n5cviperl7hkc5bva2rk6ua9xsln3nuctd2nc7x2c9an44ndva57ioxr47m78zgtkhq5mt53jt0tdv52fwnr8cktx9s9zwyybiyl16vszxtelwc0pb518qh0v88xxbsru2r7jgcidv2zvw58kr5j0a278a6jwxdigovbz3am03cafjgeoh1vr7asl7hbrlkid0eyzmjmp0nbv6j42mhe8orq97y0gy0ohsmfa02qzwlg3vptltlx0jeekqzeh3p7cl6l4lamb0usoj7i4mjwrz9vi3g7lh3sgxw8pcpzh0jg9naym0qeu74t7sf8ls96sl5pdfz6x24d8t9mgu008a931tydx1xjih0aj3fag1f2fqvx74m1p4ftrd88bbbz0047fo517ylxwm1xynlwndb9hwfif4hp79w9kmyp1myy9vd09dk6rdk2lpo93iwdck46vtju6f1d8aur3w75vd2ahwkwdg3pnxixvbfhvqhkw2sath0uzcgp4lzpyyuduqroudcgdraf1fnsbqbqcvow6ovn7is7n2c8w7pvprf31qypdo16vkcchqecy1dh3cqsta3zf2dho9y3al6q60as144vvc5xxgif1vp9fjp8dq1zbs859jri5qxa943gqsnqy0kcwpey06gjcjxroban5kb2ygctl8oagzj53zkajpbpf2wze2nlkwndu96i65jlbhel0xu8gctvnl56tn1mnd4omofvyofdhzup1atyj9yy1z7y449o182b8e1uxt861hmskwxeeer65o588bxtge2yasqg5w3qfxmiza4173h0u83eioyw2utwg51s8rc 00:09:21.309 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@42 -- # echo 
05ixuo58yqfvjzyugs7qpvyjjy1x0x889tvece71y8fgkpsh5m2xqz4pcv55pif23ouijg0bxo33kx9lj1pwejskdoukezux8nm00n0gaedhu08fwnx1acrulvu4u911c21wrbz8ly4aurueck1h0lnt37yvvlk89cngysxr1c7qyvlp08p24czs16qi6n5cviperl7hkc5bva2rk6ua9xsln3nuctd2nc7x2c9an44ndva57ioxr47m78zgtkhq5mt53jt0tdv52fwnr8cktx9s9zwyybiyl16vszxtelwc0pb518qh0v88xxbsru2r7jgcidv2zvw58kr5j0a278a6jwxdigovbz3am03cafjgeoh1vr7asl7hbrlkid0eyzmjmp0nbv6j42mhe8orq97y0gy0ohsmfa02qzwlg3vptltlx0jeekqzeh3p7cl6l4lamb0usoj7i4mjwrz9vi3g7lh3sgxw8pcpzh0jg9naym0qeu74t7sf8ls96sl5pdfz6x24d8t9mgu008a931tydx1xjih0aj3fag1f2fqvx74m1p4ftrd88bbbz0047fo517ylxwm1xynlwndb9hwfif4hp79w9kmyp1myy9vd09dk6rdk2lpo93iwdck46vtju6f1d8aur3w75vd2ahwkwdg3pnxixvbfhvqhkw2sath0uzcgp4lzpyyuduqroudcgdraf1fnsbqbqcvow6ovn7is7n2c8w7pvprf31qypdo16vkcchqecy1dh3cqsta3zf2dho9y3al6q60as144vvc5xxgif1vp9fjp8dq1zbs859jri5qxa943gqsnqy0kcwpey06gjcjxroban5kb2ygctl8oagzj53zkajpbpf2wze2nlkwndu96i65jlbhel0xu8gctvnl56tn1mnd4omofvyofdhzup1atyj9yy1z7y449o182b8e1uxt861hmskwxeeer65o588bxtge2yasqg5w3qfxmiza4173h0u83eioyw2utwg51s8rc 00:09:21.309 01:29:29 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/zero --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --oflag=append --bs=536869887 --count=1 00:09:21.309 [2024-11-17 01:29:29.714536] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:21.309 [2024-11-17 01:29:29.714907] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63068 ] 00:09:21.567 [2024-11-17 01:29:29.879856] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:21.567 [2024-11-17 01:29:29.967543] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:21.827 [2024-11-17 01:29:30.133377] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:22.764  [2024-11-17T01:29:33.129Z] Copying: 511/511 [MB] (average 1276 MBps) 00:09:24.670 00:09:24.670 01:29:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 --ob=uring0 --json /dev/fd/62 00:09:24.670 01:29:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@54 -- # gen_conf 00:09:24.670 01:29:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:24.670 01:29:32 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:24.670 { 00:09:24.670 "subsystems": [ 00:09:24.670 { 00:09:24.670 "subsystem": "bdev", 00:09:24.670 "config": [ 00:09:24.670 { 00:09:24.670 "params": { 00:09:24.670 "block_size": 512, 00:09:24.670 "num_blocks": 1048576, 00:09:24.670 "name": "malloc0" 00:09:24.670 }, 00:09:24.670 "method": "bdev_malloc_create" 00:09:24.670 }, 00:09:24.670 { 00:09:24.670 "params": { 00:09:24.670 "filename": "/dev/zram1", 00:09:24.670 "name": "uring0" 00:09:24.670 }, 00:09:24.670 "method": "bdev_uring_create" 00:09:24.670 }, 00:09:24.670 { 00:09:24.670 "method": "bdev_wait_for_examine" 00:09:24.670 } 00:09:24.670 ] 00:09:24.670 } 00:09:24.670 ] 00:09:24.670 } 00:09:24.670 [2024-11-17 01:29:32.959004] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
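The dd_uring_copy setup traced above builds its own backing device instead of using NVMe: it hot-adds a zram device through /sys/class/zram-control/hot_add (which returned id 1 in this run), sizes it to 512M, and later exposes /dev/zram1 to spdk_dd as an io_uring bdev named uring0, paired with a malloc bdev of 1048576 blocks of 512 bytes. The magic.dump0 file is then grown by a single 536869887-byte block with --oflag=append; together with the 1025-byte magic line that is exactly 512 MiB, matching the zram size. A rough sketch of the zram side (writing the size to /sys/block/zram1/disksize is an assumption based on the standard zram sysfs interface; the trace only shows the 'echo 512M'):

    id=$(cat /sys/class/zram-control/hot_add)     # "1" in this run
    echo 512M > "/sys/block/zram${id}/disksize"   # assumed target of the echo in set_zram_dev
    # spdk_dd later receives this bdev config (mirrors the JSON in the trace):
    #   {"params":{"filename":"/dev/zram1","name":"uring0"},"method":"bdev_uring_create"}
    #   {"params":{"name":"malloc0","num_blocks":1048576,"block_size":512},"method":"bdev_malloc_create"}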
00:09:24.670 [2024-11-17 01:29:32.959204] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63107 ] 00:09:24.930 [2024-11-17 01:29:33.132744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:24.930 [2024-11-17 01:29:33.220614] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:24.930 [2024-11-17 01:29:33.380614] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:26.837  [2024-11-17T01:29:36.231Z] Copying: 215/512 [MB] (215 MBps) [2024-11-17T01:29:36.490Z] Copying: 418/512 [MB] (202 MBps) [2024-11-17T01:29:38.393Z] Copying: 512/512 [MB] (average 207 MBps) 00:09:29.934 00:09:29.934 01:29:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 --json /dev/fd/62 00:09:29.934 01:29:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@60 -- # gen_conf 00:09:29.934 01:29:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:29.934 01:29:38 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:29.934 { 00:09:29.934 "subsystems": [ 00:09:29.934 { 00:09:29.934 "subsystem": "bdev", 00:09:29.934 "config": [ 00:09:29.934 { 00:09:29.934 "params": { 00:09:29.934 "block_size": 512, 00:09:29.934 "num_blocks": 1048576, 00:09:29.934 "name": "malloc0" 00:09:29.934 }, 00:09:29.934 "method": "bdev_malloc_create" 00:09:29.934 }, 00:09:29.934 { 00:09:29.934 "params": { 00:09:29.934 "filename": "/dev/zram1", 00:09:29.934 "name": "uring0" 00:09:29.934 }, 00:09:29.934 "method": "bdev_uring_create" 00:09:29.934 }, 00:09:29.934 { 00:09:29.934 "method": "bdev_wait_for_examine" 00:09:29.934 } 00:09:29.934 ] 00:09:29.934 } 00:09:29.934 ] 00:09:29.934 } 00:09:30.193 [2024-11-17 01:29:38.416201] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:30.193 [2024-11-17 01:29:38.416384] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63180 ] 00:09:30.193 [2024-11-17 01:29:38.594476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:30.452 [2024-11-17 01:29:38.679112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.452 [2024-11-17 01:29:38.854118] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:32.356  [2024-11-17T01:29:41.382Z] Copying: 146/512 [MB] (146 MBps) [2024-11-17T01:29:42.757Z] Copying: 296/512 [MB] (149 MBps) [2024-11-17T01:29:43.015Z] Copying: 451/512 [MB] (154 MBps) [2024-11-17T01:29:44.940Z] Copying: 512/512 [MB] (average 149 MBps) 00:09:36.481 00:09:36.481 01:29:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@65 -- # read -rn1024 verify_magic 00:09:36.482 01:29:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@66 -- # [[ 05ixuo58yqfvjzyugs7qpvyjjy1x0x889tvece71y8fgkpsh5m2xqz4pcv55pif23ouijg0bxo33kx9lj1pwejskdoukezux8nm00n0gaedhu08fwnx1acrulvu4u911c21wrbz8ly4aurueck1h0lnt37yvvlk89cngysxr1c7qyvlp08p24czs16qi6n5cviperl7hkc5bva2rk6ua9xsln3nuctd2nc7x2c9an44ndva57ioxr47m78zgtkhq5mt53jt0tdv52fwnr8cktx9s9zwyybiyl16vszxtelwc0pb518qh0v88xxbsru2r7jgcidv2zvw58kr5j0a278a6jwxdigovbz3am03cafjgeoh1vr7asl7hbrlkid0eyzmjmp0nbv6j42mhe8orq97y0gy0ohsmfa02qzwlg3vptltlx0jeekqzeh3p7cl6l4lamb0usoj7i4mjwrz9vi3g7lh3sgxw8pcpzh0jg9naym0qeu74t7sf8ls96sl5pdfz6x24d8t9mgu008a931tydx1xjih0aj3fag1f2fqvx74m1p4ftrd88bbbz0047fo517ylxwm1xynlwndb9hwfif4hp79w9kmyp1myy9vd09dk6rdk2lpo93iwdck46vtju6f1d8aur3w75vd2ahwkwdg3pnxixvbfhvqhkw2sath0uzcgp4lzpyyuduqroudcgdraf1fnsbqbqcvow6ovn7is7n2c8w7pvprf31qypdo16vkcchqecy1dh3cqsta3zf2dho9y3al6q60as144vvc5xxgif1vp9fjp8dq1zbs859jri5qxa943gqsnqy0kcwpey06gjcjxroban5kb2ygctl8oagzj53zkajpbpf2wze2nlkwndu96i65jlbhel0xu8gctvnl56tn1mnd4omofvyofdhzup1atyj9yy1z7y449o182b8e1uxt861hmskwxeeer65o588bxtge2yasqg5w3qfxmiza4173h0u83eioyw2utwg51s8rc == 
\0\5\i\x\u\o\5\8\y\q\f\v\j\z\y\u\g\s\7\q\p\v\y\j\j\y\1\x\0\x\8\8\9\t\v\e\c\e\7\1\y\8\f\g\k\p\s\h\5\m\2\x\q\z\4\p\c\v\5\5\p\i\f\2\3\o\u\i\j\g\0\b\x\o\3\3\k\x\9\l\j\1\p\w\e\j\s\k\d\o\u\k\e\z\u\x\8\n\m\0\0\n\0\g\a\e\d\h\u\0\8\f\w\n\x\1\a\c\r\u\l\v\u\4\u\9\1\1\c\2\1\w\r\b\z\8\l\y\4\a\u\r\u\e\c\k\1\h\0\l\n\t\3\7\y\v\v\l\k\8\9\c\n\g\y\s\x\r\1\c\7\q\y\v\l\p\0\8\p\2\4\c\z\s\1\6\q\i\6\n\5\c\v\i\p\e\r\l\7\h\k\c\5\b\v\a\2\r\k\6\u\a\9\x\s\l\n\3\n\u\c\t\d\2\n\c\7\x\2\c\9\a\n\4\4\n\d\v\a\5\7\i\o\x\r\4\7\m\7\8\z\g\t\k\h\q\5\m\t\5\3\j\t\0\t\d\v\5\2\f\w\n\r\8\c\k\t\x\9\s\9\z\w\y\y\b\i\y\l\1\6\v\s\z\x\t\e\l\w\c\0\p\b\5\1\8\q\h\0\v\8\8\x\x\b\s\r\u\2\r\7\j\g\c\i\d\v\2\z\v\w\5\8\k\r\5\j\0\a\2\7\8\a\6\j\w\x\d\i\g\o\v\b\z\3\a\m\0\3\c\a\f\j\g\e\o\h\1\v\r\7\a\s\l\7\h\b\r\l\k\i\d\0\e\y\z\m\j\m\p\0\n\b\v\6\j\4\2\m\h\e\8\o\r\q\9\7\y\0\g\y\0\o\h\s\m\f\a\0\2\q\z\w\l\g\3\v\p\t\l\t\l\x\0\j\e\e\k\q\z\e\h\3\p\7\c\l\6\l\4\l\a\m\b\0\u\s\o\j\7\i\4\m\j\w\r\z\9\v\i\3\g\7\l\h\3\s\g\x\w\8\p\c\p\z\h\0\j\g\9\n\a\y\m\0\q\e\u\7\4\t\7\s\f\8\l\s\9\6\s\l\5\p\d\f\z\6\x\2\4\d\8\t\9\m\g\u\0\0\8\a\9\3\1\t\y\d\x\1\x\j\i\h\0\a\j\3\f\a\g\1\f\2\f\q\v\x\7\4\m\1\p\4\f\t\r\d\8\8\b\b\b\z\0\0\4\7\f\o\5\1\7\y\l\x\w\m\1\x\y\n\l\w\n\d\b\9\h\w\f\i\f\4\h\p\7\9\w\9\k\m\y\p\1\m\y\y\9\v\d\0\9\d\k\6\r\d\k\2\l\p\o\9\3\i\w\d\c\k\4\6\v\t\j\u\6\f\1\d\8\a\u\r\3\w\7\5\v\d\2\a\h\w\k\w\d\g\3\p\n\x\i\x\v\b\f\h\v\q\h\k\w\2\s\a\t\h\0\u\z\c\g\p\4\l\z\p\y\y\u\d\u\q\r\o\u\d\c\g\d\r\a\f\1\f\n\s\b\q\b\q\c\v\o\w\6\o\v\n\7\i\s\7\n\2\c\8\w\7\p\v\p\r\f\3\1\q\y\p\d\o\1\6\v\k\c\c\h\q\e\c\y\1\d\h\3\c\q\s\t\a\3\z\f\2\d\h\o\9\y\3\a\l\6\q\6\0\a\s\1\4\4\v\v\c\5\x\x\g\i\f\1\v\p\9\f\j\p\8\d\q\1\z\b\s\8\5\9\j\r\i\5\q\x\a\9\4\3\g\q\s\n\q\y\0\k\c\w\p\e\y\0\6\g\j\c\j\x\r\o\b\a\n\5\k\b\2\y\g\c\t\l\8\o\a\g\z\j\5\3\z\k\a\j\p\b\p\f\2\w\z\e\2\n\l\k\w\n\d\u\9\6\i\6\5\j\l\b\h\e\l\0\x\u\8\g\c\t\v\n\l\5\6\t\n\1\m\n\d\4\o\m\o\f\v\y\o\f\d\h\z\u\p\1\a\t\y\j\9\y\y\1\z\7\y\4\4\9\o\1\8\2\b\8\e\1\u\x\t\8\6\1\h\m\s\k\w\x\e\e\e\r\6\5\o\5\8\8\b\x\t\g\e\2\y\a\s\q\g\5\w\3\q\f\x\m\i\z\a\4\1\7\3\h\0\u\8\3\e\i\o\y\w\2\u\t\w\g\5\1\s\8\r\c ]] 00:09:36.482 01:29:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@68 -- # read -rn1024 verify_magic 00:09:36.482 01:29:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@69 -- # [[ 05ixuo58yqfvjzyugs7qpvyjjy1x0x889tvece71y8fgkpsh5m2xqz4pcv55pif23ouijg0bxo33kx9lj1pwejskdoukezux8nm00n0gaedhu08fwnx1acrulvu4u911c21wrbz8ly4aurueck1h0lnt37yvvlk89cngysxr1c7qyvlp08p24czs16qi6n5cviperl7hkc5bva2rk6ua9xsln3nuctd2nc7x2c9an44ndva57ioxr47m78zgtkhq5mt53jt0tdv52fwnr8cktx9s9zwyybiyl16vszxtelwc0pb518qh0v88xxbsru2r7jgcidv2zvw58kr5j0a278a6jwxdigovbz3am03cafjgeoh1vr7asl7hbrlkid0eyzmjmp0nbv6j42mhe8orq97y0gy0ohsmfa02qzwlg3vptltlx0jeekqzeh3p7cl6l4lamb0usoj7i4mjwrz9vi3g7lh3sgxw8pcpzh0jg9naym0qeu74t7sf8ls96sl5pdfz6x24d8t9mgu008a931tydx1xjih0aj3fag1f2fqvx74m1p4ftrd88bbbz0047fo517ylxwm1xynlwndb9hwfif4hp79w9kmyp1myy9vd09dk6rdk2lpo93iwdck46vtju6f1d8aur3w75vd2ahwkwdg3pnxixvbfhvqhkw2sath0uzcgp4lzpyyuduqroudcgdraf1fnsbqbqcvow6ovn7is7n2c8w7pvprf31qypdo16vkcchqecy1dh3cqsta3zf2dho9y3al6q60as144vvc5xxgif1vp9fjp8dq1zbs859jri5qxa943gqsnqy0kcwpey06gjcjxroban5kb2ygctl8oagzj53zkajpbpf2wze2nlkwndu96i65jlbhel0xu8gctvnl56tn1mnd4omofvyofdhzup1atyj9yy1z7y449o182b8e1uxt861hmskwxeeer65o588bxtge2yasqg5w3qfxmiza4173h0u83eioyw2utwg51s8rc == 
\0\5\i\x\u\o\5\8\y\q\f\v\j\z\y\u\g\s\7\q\p\v\y\j\j\y\1\x\0\x\8\8\9\t\v\e\c\e\7\1\y\8\f\g\k\p\s\h\5\m\2\x\q\z\4\p\c\v\5\5\p\i\f\2\3\o\u\i\j\g\0\b\x\o\3\3\k\x\9\l\j\1\p\w\e\j\s\k\d\o\u\k\e\z\u\x\8\n\m\0\0\n\0\g\a\e\d\h\u\0\8\f\w\n\x\1\a\c\r\u\l\v\u\4\u\9\1\1\c\2\1\w\r\b\z\8\l\y\4\a\u\r\u\e\c\k\1\h\0\l\n\t\3\7\y\v\v\l\k\8\9\c\n\g\y\s\x\r\1\c\7\q\y\v\l\p\0\8\p\2\4\c\z\s\1\6\q\i\6\n\5\c\v\i\p\e\r\l\7\h\k\c\5\b\v\a\2\r\k\6\u\a\9\x\s\l\n\3\n\u\c\t\d\2\n\c\7\x\2\c\9\a\n\4\4\n\d\v\a\5\7\i\o\x\r\4\7\m\7\8\z\g\t\k\h\q\5\m\t\5\3\j\t\0\t\d\v\5\2\f\w\n\r\8\c\k\t\x\9\s\9\z\w\y\y\b\i\y\l\1\6\v\s\z\x\t\e\l\w\c\0\p\b\5\1\8\q\h\0\v\8\8\x\x\b\s\r\u\2\r\7\j\g\c\i\d\v\2\z\v\w\5\8\k\r\5\j\0\a\2\7\8\a\6\j\w\x\d\i\g\o\v\b\z\3\a\m\0\3\c\a\f\j\g\e\o\h\1\v\r\7\a\s\l\7\h\b\r\l\k\i\d\0\e\y\z\m\j\m\p\0\n\b\v\6\j\4\2\m\h\e\8\o\r\q\9\7\y\0\g\y\0\o\h\s\m\f\a\0\2\q\z\w\l\g\3\v\p\t\l\t\l\x\0\j\e\e\k\q\z\e\h\3\p\7\c\l\6\l\4\l\a\m\b\0\u\s\o\j\7\i\4\m\j\w\r\z\9\v\i\3\g\7\l\h\3\s\g\x\w\8\p\c\p\z\h\0\j\g\9\n\a\y\m\0\q\e\u\7\4\t\7\s\f\8\l\s\9\6\s\l\5\p\d\f\z\6\x\2\4\d\8\t\9\m\g\u\0\0\8\a\9\3\1\t\y\d\x\1\x\j\i\h\0\a\j\3\f\a\g\1\f\2\f\q\v\x\7\4\m\1\p\4\f\t\r\d\8\8\b\b\b\z\0\0\4\7\f\o\5\1\7\y\l\x\w\m\1\x\y\n\l\w\n\d\b\9\h\w\f\i\f\4\h\p\7\9\w\9\k\m\y\p\1\m\y\y\9\v\d\0\9\d\k\6\r\d\k\2\l\p\o\9\3\i\w\d\c\k\4\6\v\t\j\u\6\f\1\d\8\a\u\r\3\w\7\5\v\d\2\a\h\w\k\w\d\g\3\p\n\x\i\x\v\b\f\h\v\q\h\k\w\2\s\a\t\h\0\u\z\c\g\p\4\l\z\p\y\y\u\d\u\q\r\o\u\d\c\g\d\r\a\f\1\f\n\s\b\q\b\q\c\v\o\w\6\o\v\n\7\i\s\7\n\2\c\8\w\7\p\v\p\r\f\3\1\q\y\p\d\o\1\6\v\k\c\c\h\q\e\c\y\1\d\h\3\c\q\s\t\a\3\z\f\2\d\h\o\9\y\3\a\l\6\q\6\0\a\s\1\4\4\v\v\c\5\x\x\g\i\f\1\v\p\9\f\j\p\8\d\q\1\z\b\s\8\5\9\j\r\i\5\q\x\a\9\4\3\g\q\s\n\q\y\0\k\c\w\p\e\y\0\6\g\j\c\j\x\r\o\b\a\n\5\k\b\2\y\g\c\t\l\8\o\a\g\z\j\5\3\z\k\a\j\p\b\p\f\2\w\z\e\2\n\l\k\w\n\d\u\9\6\i\6\5\j\l\b\h\e\l\0\x\u\8\g\c\t\v\n\l\5\6\t\n\1\m\n\d\4\o\m\o\f\v\y\o\f\d\h\z\u\p\1\a\t\y\j\9\y\y\1\z\7\y\4\4\9\o\1\8\2\b\8\e\1\u\x\t\8\6\1\h\m\s\k\w\x\e\e\e\r\6\5\o\5\8\8\b\x\t\g\e\2\y\a\s\q\g\5\w\3\q\f\x\m\i\z\a\4\1\7\3\h\0\u\8\3\e\i\o\y\w\2\u\t\w\g\5\1\s\8\r\c ]] 00:09:36.482 01:29:44 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@71 -- # diff -q /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:36.740 01:29:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --ob=malloc0 --json /dev/fd/62 00:09:36.740 01:29:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@75 -- # gen_conf 00:09:36.740 01:29:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:36.740 01:29:45 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:36.740 { 00:09:36.740 "subsystems": [ 00:09:36.740 { 00:09:36.740 "subsystem": "bdev", 00:09:36.740 "config": [ 00:09:36.740 { 00:09:36.740 "params": { 00:09:36.740 "block_size": 512, 00:09:36.740 "num_blocks": 1048576, 00:09:36.740 "name": "malloc0" 00:09:36.740 }, 00:09:36.740 "method": "bdev_malloc_create" 00:09:36.740 }, 00:09:36.740 { 00:09:36.740 "params": { 00:09:36.740 "filename": "/dev/zram1", 00:09:36.740 "name": "uring0" 00:09:36.740 }, 00:09:36.740 "method": "bdev_uring_create" 00:09:36.740 }, 00:09:36.740 { 00:09:36.740 "method": "bdev_wait_for_examine" 00:09:36.740 } 00:09:36.740 ] 00:09:36.740 } 00:09:36.740 ] 00:09:36.740 } 00:09:36.740 [2024-11-17 01:29:45.184478] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
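Verification in dd_uring_copy is two-stage, as the escaped comparison above shows: the first 1024 bytes of each dump file are read back with read -rn1024 and matched against the generated magic string with a bash [[ ... == ... ]] test (bash xtrace prints the right-hand pattern with every character backslash-escaped, which is why the second operand looks garbled), and then diff -q compares the full files. Roughly, under the assumption that the reads are redirected from the dump files and that $magic_file0/$magic_file1 name the paths seen in the trace:

    read -rn1024 verify_magic < "$magic_file1"    # .../test/dd/magic.dump1 in this run
    [[ $verify_magic == "$magic" ]]               # $magic is the 1024-byte string from gen_bytes 1024
    diff -q "$magic_file0" "$magic_file1"         # whole-file check after the round trip through uring0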
00:09:36.740 [2024-11-17 01:29:45.184650] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63291 ] 00:09:36.999 [2024-11-17 01:29:45.362955] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.258 [2024-11-17 01:29:45.462037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.258 [2024-11-17 01:29:45.616485] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:39.164  [2024-11-17T01:29:48.192Z] Copying: 130/512 [MB] (130 MBps) [2024-11-17T01:29:49.570Z] Copying: 257/512 [MB] (126 MBps) [2024-11-17T01:29:50.137Z] Copying: 387/512 [MB] (130 MBps) [2024-11-17T01:29:52.042Z] Copying: 512/512 [MB] (average 129 MBps) 00:09:43.583 00:09:43.583 01:29:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # method_bdev_uring_delete_0=(['name']='uring0') 00:09:43.583 01:29:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@82 -- # local -A method_bdev_uring_delete_0 00:09:43.583 01:29:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:43.583 01:29:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # : 00:09:43.583 01:29:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/fd/62 --of=/dev/fd/61 --json /dev/fd/59 00:09:43.583 01:29:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@87 -- # gen_conf 00:09:43.583 01:29:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:43.583 01:29:52 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:43.842 { 00:09:43.842 "subsystems": [ 00:09:43.842 { 00:09:43.842 "subsystem": "bdev", 00:09:43.842 "config": [ 00:09:43.842 { 00:09:43.842 "params": { 00:09:43.842 "block_size": 512, 00:09:43.842 "num_blocks": 1048576, 00:09:43.842 "name": "malloc0" 00:09:43.842 }, 00:09:43.842 "method": "bdev_malloc_create" 00:09:43.842 }, 00:09:43.842 { 00:09:43.842 "params": { 00:09:43.842 "filename": "/dev/zram1", 00:09:43.842 "name": "uring0" 00:09:43.842 }, 00:09:43.842 "method": "bdev_uring_create" 00:09:43.842 }, 00:09:43.842 { 00:09:43.842 "params": { 00:09:43.842 "name": "uring0" 00:09:43.842 }, 00:09:43.842 "method": "bdev_uring_delete" 00:09:43.843 }, 00:09:43.843 { 00:09:43.843 "method": "bdev_wait_for_examine" 00:09:43.843 } 00:09:43.843 ] 00:09:43.843 } 00:09:43.843 ] 00:09:43.843 } 00:09:43.843 [2024-11-17 01:29:52.120240] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
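The final dd_uring_copy step is a negative test: the JSON config above ends with bdev_uring_delete, so uring0 is created and immediately removed, and the copy that follows (wrapped in the harness's NOT helper) is expected to fail with "No such device". The test only passes because spdk_dd exits non-zero. Schematically, with the file arguments and the $conf_with_delete name as placeholders for the fd plumbing visible in the trace:

    # NOT inverts the exit status: success here means spdk_dd refused the deleted bdev
    NOT "$SPDK_DD" --ib=uring0 --of="$out_file" --json "$conf_with_delete"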
00:09:43.843 [2024-11-17 01:29:52.120429] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63381 ] 00:09:43.843 [2024-11-17 01:29:52.299105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.102 [2024-11-17 01:29:52.392752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.102 [2024-11-17 01:29:52.552783] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:44.670  [2024-11-17T01:29:55.665Z] Copying: 0/0 [B] (average 0 Bps) 00:09:47.206 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # : 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@652 -- # local es=0 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@94 -- # gen_conf 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@31 -- # xtrace_disable 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:47.206 01:29:55 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=uring0 --of=/dev/fd/62 --json /dev/fd/61 00:09:47.206 { 00:09:47.206 "subsystems": [ 00:09:47.206 { 00:09:47.206 "subsystem": "bdev", 00:09:47.206 "config": [ 00:09:47.206 { 00:09:47.206 "params": { 00:09:47.206 "block_size": 512, 00:09:47.206 "num_blocks": 1048576, 00:09:47.206 "name": "malloc0" 00:09:47.206 }, 00:09:47.206 "method": "bdev_malloc_create" 00:09:47.206 }, 00:09:47.206 { 00:09:47.206 "params": { 00:09:47.206 "filename": "/dev/zram1", 00:09:47.206 "name": "uring0" 00:09:47.206 }, 00:09:47.206 "method": "bdev_uring_create" 00:09:47.206 }, 00:09:47.206 { 00:09:47.206 "params": { 00:09:47.206 "name": "uring0" 00:09:47.206 }, 00:09:47.206 "method": 
"bdev_uring_delete" 00:09:47.206 }, 00:09:47.206 { 00:09:47.206 "method": "bdev_wait_for_examine" 00:09:47.206 } 00:09:47.206 ] 00:09:47.206 } 00:09:47.206 ] 00:09:47.206 } 00:09:47.206 [2024-11-17 01:29:55.236156] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:47.206 [2024-11-17 01:29:55.236381] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63433 ] 00:09:47.206 [2024-11-17 01:29:55.415573] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:47.206 [2024-11-17 01:29:55.509145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.466 [2024-11-17 01:29:55.688443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:48.034 [2024-11-17 01:29:56.264130] bdev.c:8277:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: uring0 00:09:48.034 [2024-11-17 01:29:56.264219] spdk_dd.c: 933:dd_open_bdev: *ERROR*: Could not open bdev uring0: No such device 00:09:48.034 [2024-11-17 01:29:56.264240] spdk_dd.c:1090:dd_run: *ERROR*: uring0: No such device 00:09:48.034 [2024-11-17 01:29:56.264256] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:49.939 [2024-11-17 01:29:57.961062] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@655 -- # es=237 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@664 -- # es=109 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@665 -- # case "$es" in 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@672 -- # es=1 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@99 -- # remove_zram_dev 1 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@168 -- # local id=1 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@170 -- # [[ -e /sys/block/zram1 ]] 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@172 -- # echo 1 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/common.sh@173 -- # echo 1 00:09:49.939 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- dd/uring.sh@100 -- # rm -f /home/vagrant/spdk_repo/spdk/test/dd/magic.dump0 /home/vagrant/spdk_repo/spdk/test/dd/magic.dump1 00:09:50.198 00:09:50.198 real 0m28.952s 00:09:50.198 user 0m23.568s 00:09:50.198 sys 0m15.635s 00:09:50.199 ************************************ 00:09:50.199 END TEST dd_uring_copy 00:09:50.199 ************************************ 00:09:50.199 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.199 01:29:58 spdk_dd.spdk_dd_uring.dd_uring_copy -- common/autotest_common.sh@10 -- # set +x 00:09:50.199 ************************************ 00:09:50.199 END TEST spdk_dd_uring 00:09:50.199 ************************************ 00:09:50.199 00:09:50.199 real 0m29.186s 00:09:50.199 user 0m23.687s 00:09:50.199 sys 0m15.749s 00:09:50.199 01:29:58 spdk_dd.spdk_dd_uring -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:09:50.199 01:29:58 spdk_dd.spdk_dd_uring -- common/autotest_common.sh@10 -- # set +x 00:09:50.199 01:29:58 spdk_dd -- dd/dd.sh@27 -- # run_test spdk_dd_sparse /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:50.199 01:29:58 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.199 01:29:58 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.199 01:29:58 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:50.199 ************************************ 00:09:50.199 START TEST spdk_dd_sparse 00:09:50.199 ************************************ 00:09:50.199 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/sparse.sh 00:09:50.459 * Looking for test storage... 00:09:50.459 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lcov --version 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # IFS=.-: 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@336 -- # read -ra ver1 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # IFS=.-: 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@337 -- # read -ra ver2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@338 -- # local 'op=<' 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@340 -- # ver1_l=2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@341 -- # ver2_l=1 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@344 -- # case "$op" in 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@345 -- # : 1 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # decimal 1 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=1 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 1 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@365 -- # ver1[v]=1 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # decimal 2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@353 -- # local d=2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@355 -- # echo 2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@366 -- # ver2[v]=2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@368 -- # return 0 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:50.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.459 --rc genhtml_branch_coverage=1 00:09:50.459 --rc genhtml_function_coverage=1 00:09:50.459 --rc genhtml_legend=1 00:09:50.459 --rc geninfo_all_blocks=1 00:09:50.459 --rc geninfo_unexecuted_blocks=1 00:09:50.459 00:09:50.459 ' 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:50.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.459 --rc genhtml_branch_coverage=1 00:09:50.459 --rc genhtml_function_coverage=1 00:09:50.459 --rc genhtml_legend=1 00:09:50.459 --rc geninfo_all_blocks=1 00:09:50.459 --rc geninfo_unexecuted_blocks=1 00:09:50.459 00:09:50.459 ' 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:50.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.459 --rc genhtml_branch_coverage=1 00:09:50.459 --rc genhtml_function_coverage=1 00:09:50.459 --rc genhtml_legend=1 00:09:50.459 --rc geninfo_all_blocks=1 00:09:50.459 --rc geninfo_unexecuted_blocks=1 00:09:50.459 00:09:50.459 ' 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:50.459 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:50.459 --rc genhtml_branch_coverage=1 00:09:50.459 --rc genhtml_function_coverage=1 00:09:50.459 --rc genhtml_legend=1 00:09:50.459 --rc geninfo_all_blocks=1 00:09:50.459 --rc geninfo_unexecuted_blocks=1 00:09:50.459 00:09:50.459 ' 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@15 -- # shopt -s extglob 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:50.459 01:29:58 
spdk_dd.spdk_dd_sparse -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@5 -- # export PATH 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@108 -- # aio_disk=dd_sparse_aio_disk 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@109 -- # aio_bdev=dd_aio 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@110 -- # file1=file_zero1 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@111 -- # file2=file_zero2 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@112 -- # file3=file_zero3 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@113 -- # lvstore=dd_lvstore 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@114 -- # lvol=dd_lvol 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@116 -- # trap cleanup EXIT 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@118 -- # prepare 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@18 -- # truncate dd_sparse_aio_disk --size 104857600 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@20 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 00:09:50.459 1+0 records in 00:09:50.459 1+0 records out 00:09:50.459 4194304 bytes (4.2 MB, 
4.0 MiB) copied, 0.00700053 s, 599 MB/s 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@21 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=4 00:09:50.459 1+0 records in 00:09:50.459 1+0 records out 00:09:50.459 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00624313 s, 672 MB/s 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@22 -- # dd if=/dev/zero of=file_zero1 bs=4M count=1 seek=8 00:09:50.459 1+0 records in 00:09:50.459 1+0 records out 00:09:50.459 4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.00388347 s, 1.1 GB/s 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@120 -- # run_test dd_sparse_file_to_file file_to_file 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:50.459 ************************************ 00:09:50.459 START TEST dd_sparse_file_to_file 00:09:50.459 ************************************ 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1129 -- # file_to_file 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@26 -- # local stat1_s stat1_b 00:09:50.459 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@27 -- # local stat2_s stat2_b 00:09:50.460 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:50.460 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@29 -- # local -A method_bdev_aio_create_0 00:09:50.460 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # method_bdev_lvol_create_lvstore_1=(['bdev_name']='dd_aio' ['lvs_name']='dd_lvstore') 00:09:50.460 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@35 -- # local -A method_bdev_lvol_create_lvstore_1 00:09:50.460 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero1 --of=file_zero2 --bs=12582912 --sparse --json /dev/fd/62 00:09:50.460 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@41 -- # gen_conf 00:09:50.460 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:50.460 01:29:58 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:50.719 { 00:09:50.719 "subsystems": [ 00:09:50.719 { 00:09:50.719 "subsystem": "bdev", 00:09:50.719 "config": [ 00:09:50.719 { 00:09:50.719 "params": { 00:09:50.719 "block_size": 4096, 00:09:50.719 "filename": "dd_sparse_aio_disk", 00:09:50.719 "name": "dd_aio" 00:09:50.719 }, 00:09:50.719 "method": "bdev_aio_create" 00:09:50.719 }, 00:09:50.719 { 00:09:50.719 "params": { 00:09:50.719 "lvs_name": "dd_lvstore", 00:09:50.719 "bdev_name": "dd_aio" 00:09:50.719 }, 00:09:50.719 "method": "bdev_lvol_create_lvstore" 00:09:50.719 }, 00:09:50.719 { 00:09:50.719 "method": "bdev_wait_for_examine" 00:09:50.719 } 00:09:50.719 ] 00:09:50.719 } 00:09:50.719 ] 00:09:50.719 } 00:09:50.719 [2024-11-17 01:29:58.974794] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:09:50.719 [2024-11-17 01:29:58.975132] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63556 ] 00:09:50.719 [2024-11-17 01:29:59.145680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:50.978 [2024-11-17 01:29:59.236499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:50.978 [2024-11-17 01:29:59.397391] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:51.236  [2024-11-17T01:30:00.632Z] Copying: 12/36 [MB] (average 923 MBps) 00:09:52.173 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat --printf=%s file_zero1 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@47 -- # stat1_s=37748736 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat --printf=%s file_zero2 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@48 -- # stat2_s=37748736 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@50 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat --printf=%b file_zero1 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@52 -- # stat1_b=24576 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat --printf=%b file_zero2 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@53 -- # stat2_b=24576 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- dd/sparse.sh@55 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:52.173 00:09:52.173 real 0m1.692s 00:09:52.173 user 0m1.399s 00:09:52.173 sys 0m0.911s 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:52.173 ************************************ 00:09:52.173 END TEST dd_sparse_file_to_file 00:09:52.173 ************************************ 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@121 -- # run_test dd_sparse_file_to_bdev file_to_bdev 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:52.173 ************************************ 00:09:52.173 START TEST dd_sparse_file_to_bdev 00:09:52.173 ************************************ 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1129 -- # file_to_bdev 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@59 -- # local -A method_bdev_aio_create_0 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # method_bdev_lvol_create_1=(['lvs_name']='dd_lvstore' ['lvol_name']='dd_lvol' ['size_in_mib']='36' 
['thin_provision']='true') 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@65 -- # local -A method_bdev_lvol_create_1 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=file_zero2 --ob=dd_lvstore/dd_lvol --bs=12582912 --sparse --json /dev/fd/62 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/sparse.sh@73 -- # gen_conf 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- dd/common.sh@31 -- # xtrace_disable 00:09:52.173 01:30:00 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:52.431 { 00:09:52.431 "subsystems": [ 00:09:52.431 { 00:09:52.431 "subsystem": "bdev", 00:09:52.431 "config": [ 00:09:52.431 { 00:09:52.431 "params": { 00:09:52.431 "block_size": 4096, 00:09:52.431 "filename": "dd_sparse_aio_disk", 00:09:52.431 "name": "dd_aio" 00:09:52.431 }, 00:09:52.431 "method": "bdev_aio_create" 00:09:52.431 }, 00:09:52.431 { 00:09:52.431 "params": { 00:09:52.431 "lvs_name": "dd_lvstore", 00:09:52.431 "lvol_name": "dd_lvol", 00:09:52.431 "size_in_mib": 36, 00:09:52.431 "thin_provision": true 00:09:52.431 }, 00:09:52.431 "method": "bdev_lvol_create" 00:09:52.431 }, 00:09:52.431 { 00:09:52.431 "method": "bdev_wait_for_examine" 00:09:52.431 } 00:09:52.431 ] 00:09:52.431 } 00:09:52.431 ] 00:09:52.431 } 00:09:52.431 [2024-11-17 01:30:00.728151] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:52.431 [2024-11-17 01:30:00.728332] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63610 ] 00:09:52.690 [2024-11-17 01:30:00.913869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.690 [2024-11-17 01:30:01.039138] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:52.948 [2024-11-17 01:30:01.233251] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:52.948  [2024-11-17T01:30:02.786Z] Copying: 12/36 [MB] (average 545 MBps) 00:09:54.327 00:09:54.327 00:09:54.327 real 0m1.770s 00:09:54.327 user 0m1.478s 00:09:54.327 sys 0m0.937s 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_file_to_bdev -- common/autotest_common.sh@10 -- # set +x 00:09:54.327 ************************************ 00:09:54.327 END TEST dd_sparse_file_to_bdev 00:09:54.327 ************************************ 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@122 -- # run_test dd_sparse_bdev_to_file bdev_to_file 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:54.327 ************************************ 00:09:54.327 START TEST dd_sparse_bdev_to_file 00:09:54.327 ************************************ 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1129 -- # bdev_to_file 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@81 -- # local stat2_s stat2_b 
00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@82 -- # local stat3_s stat3_b 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # method_bdev_aio_create_0=(['filename']='dd_sparse_aio_disk' ['name']='dd_aio' ['block_size']='4096') 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@84 -- # local -A method_bdev_aio_create_0 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=dd_lvstore/dd_lvol --of=file_zero3 --bs=12582912 --sparse --json /dev/fd/62 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@91 -- # gen_conf 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/common.sh@31 -- # xtrace_disable 00:09:54.327 01:30:02 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:54.327 { 00:09:54.327 "subsystems": [ 00:09:54.327 { 00:09:54.327 "subsystem": "bdev", 00:09:54.327 "config": [ 00:09:54.327 { 00:09:54.327 "params": { 00:09:54.327 "block_size": 4096, 00:09:54.327 "filename": "dd_sparse_aio_disk", 00:09:54.327 "name": "dd_aio" 00:09:54.327 }, 00:09:54.327 "method": "bdev_aio_create" 00:09:54.327 }, 00:09:54.327 { 00:09:54.327 "method": "bdev_wait_for_examine" 00:09:54.327 } 00:09:54.327 ] 00:09:54.327 } 00:09:54.327 ] 00:09:54.327 } 00:09:54.327 [2024-11-17 01:30:02.561829] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:54.327 [2024-11-17 01:30:02.562004] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63660 ] 00:09:54.327 [2024-11-17 01:30:02.747821] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.586 [2024-11-17 01:30:02.876893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.586 [2024-11-17 01:30:03.037087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:54.845  [2024-11-17T01:30:04.241Z] Copying: 12/36 [MB] (average 705 MBps) 00:09:55.782 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat --printf=%s file_zero2 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@97 -- # stat2_s=37748736 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat --printf=%s file_zero3 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@98 -- # stat3_s=37748736 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@100 -- # [[ 37748736 == \3\7\7\4\8\7\3\6 ]] 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat --printf=%b file_zero2 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@102 -- # stat2_b=24576 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat --printf=%b file_zero3 00:09:55.782 ************************************ 00:09:55.782 END TEST dd_sparse_bdev_to_file 00:09:55.782 ************************************ 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@103 -- # stat3_b=24576 00:09:55.782 01:30:04 
spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- dd/sparse.sh@105 -- # [[ 24576 == \2\4\5\7\6 ]] 00:09:55.782 00:09:55.782 real 0m1.693s 00:09:55.782 user 0m1.407s 00:09:55.782 sys 0m0.890s 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse.dd_sparse_bdev_to_file -- common/autotest_common.sh@10 -- # set +x 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@1 -- # cleanup 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@11 -- # rm dd_sparse_aio_disk 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@12 -- # rm file_zero1 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@13 -- # rm file_zero2 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse -- dd/sparse.sh@14 -- # rm file_zero3 00:09:55.782 ************************************ 00:09:55.782 END TEST spdk_dd_sparse 00:09:55.782 ************************************ 00:09:55.782 00:09:55.782 real 0m5.569s 00:09:55.782 user 0m4.463s 00:09:55.782 sys 0m2.961s 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.782 01:30:04 spdk_dd.spdk_dd_sparse -- common/autotest_common.sh@10 -- # set +x 00:09:56.042 01:30:04 spdk_dd -- dd/dd.sh@28 -- # run_test spdk_dd_negative /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:56.042 01:30:04 spdk_dd -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.042 01:30:04 spdk_dd -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.042 01:30:04 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:09:56.042 ************************************ 00:09:56.042 START TEST spdk_dd_negative 00:09:56.042 ************************************ 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dd/negative_dd.sh 00:09:56.042 * Looking for test storage... 
00:09:56.042 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dd 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lcov --version 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # IFS=.-: 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@336 -- # read -ra ver1 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # IFS=.-: 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@337 -- # read -ra ver2 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@338 -- # local 'op=<' 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@340 -- # ver1_l=2 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@341 -- # ver2_l=1 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@344 -- # case "$op" in 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@345 -- # : 1 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # decimal 1 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=1 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 1 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@365 -- # ver1[v]=1 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # decimal 2 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@353 -- # local d=2 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@355 -- # echo 2 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@366 -- # ver2[v]=2 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@368 -- # return 0 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:56.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.042 --rc genhtml_branch_coverage=1 00:09:56.042 --rc genhtml_function_coverage=1 00:09:56.042 --rc genhtml_legend=1 00:09:56.042 --rc geninfo_all_blocks=1 00:09:56.042 --rc geninfo_unexecuted_blocks=1 00:09:56.042 00:09:56.042 ' 00:09:56.042 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:56.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.042 --rc genhtml_branch_coverage=1 00:09:56.042 --rc genhtml_function_coverage=1 00:09:56.043 --rc genhtml_legend=1 00:09:56.043 --rc geninfo_all_blocks=1 00:09:56.043 --rc geninfo_unexecuted_blocks=1 00:09:56.043 00:09:56.043 ' 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.043 --rc genhtml_branch_coverage=1 00:09:56.043 --rc genhtml_function_coverage=1 00:09:56.043 --rc genhtml_legend=1 00:09:56.043 --rc geninfo_all_blocks=1 00:09:56.043 --rc geninfo_unexecuted_blocks=1 00:09:56.043 00:09:56.043 ' 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:56.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:56.043 --rc genhtml_branch_coverage=1 00:09:56.043 --rc genhtml_function_coverage=1 00:09:56.043 --rc genhtml_legend=1 00:09:56.043 --rc geninfo_all_blocks=1 00:09:56.043 --rc geninfo_unexecuted_blocks=1 00:09:56.043 00:09:56.043 ' 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@15 -- # shopt -s extglob 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- paths/export.sh@5 -- # export PATH 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@210 -- # test_file0=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@211 -- # test_file1=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@213 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@214 -- # touch /home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@216 -- # run_test dd_invalid_arguments invalid_arguments 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.043 ************************************ 00:09:56.043 START TEST 
dd_invalid_arguments 00:09:56.043 ************************************ 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1129 -- # invalid_arguments 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- dd/negative_dd.sh@12 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@652 -- # local es=0 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.043 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ii= --ob= 00:09:56.303 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd [options] 00:09:56.303 00:09:56.303 CPU options: 00:09:56.303 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:09:56.303 (like [0,1,10]) 00:09:56.303 --lcores lcore to CPU mapping list. The list is in the format: 00:09:56.303 [<,lcores[@CPUs]>...] 00:09:56.303 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:09:56.303 Within the group, '-' is used for range separator, 00:09:56.303 ',' is used for single number separator. 00:09:56.303 '( )' can be omitted for single element group, 00:09:56.303 '@' can be omitted if cpus and lcores have the same value 00:09:56.303 --disable-cpumask-locks Disable CPU core lock files. 00:09:56.303 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:09:56.303 pollers in the app support interrupt mode) 00:09:56.303 -p, --main-core main (primary) core for DPDK 00:09:56.303 00:09:56.303 Configuration options: 00:09:56.303 -c, --config, --json JSON config file 00:09:56.303 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:09:56.303 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:09:56.303 --wait-for-rpc wait for RPCs to initialize subsystems 00:09:56.303 --rpcs-allowed comma-separated list of permitted RPCS 00:09:56.303 --json-ignore-init-errors don't exit on invalid config entry 00:09:56.303 00:09:56.303 Memory options: 00:09:56.303 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:09:56.303 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:09:56.303 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:09:56.303 -R, --huge-unlink unlink huge files after initialization 00:09:56.303 -n, --mem-channels number of memory channels used for DPDK 00:09:56.303 -s, --mem-size memory size in MB for DPDK (default: 0MB) 00:09:56.303 --msg-mempool-size global message memory pool size in count (default: 262143) 00:09:56.303 --no-huge run without using hugepages 00:09:56.303 --enforce-numa enforce NUMA allocations from the specified NUMA node 00:09:56.303 -i, --shm-id shared memory ID (optional) 00:09:56.303 -g, --single-file-segments force creating just one hugetlbfs file 00:09:56.303 00:09:56.303 PCI options: 00:09:56.304 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:09:56.304 -B, --pci-blocked pci addr to block (can be used more than once) 00:09:56.304 -u, --no-pci disable PCI access 00:09:56.304 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:09:56.304 00:09:56.304 Log options: 00:09:56.304 -L, --logflag enable log flag (all, accel, accel_dsa, accel_iaa, accel_ioat, aio, 00:09:56.304 app_config, app_rpc, bdev, bdev_concat, bdev_ftl, bdev_malloc, 00:09:56.304 bdev_null, bdev_nvme, bdev_raid, bdev_raid0, bdev_raid1, bdev_raid_sb, 00:09:56.304 blob, blob_esnap, blob_rw, blobfs, blobfs_bdev, blobfs_bdev_rpc, 00:09:56.304 blobfs_rw, fsdev, fsdev_aio, ftl_core, ftl_init, fuse_dispatcher, 00:09:56.304 gpt_parse, idxd, ioat, iscsi_init, json_util, keyring, log_rpc, lvol, 00:09:56.304 lvol_rpc, notify_rpc, nvme, nvme_auth, nvme_cuse, nvme_vfio, opal, 00:09:56.304 reactor, rpc, rpc_client, scsi, sock, sock_posix, spdk_aio_mgr_io, 00:09:56.304 thread, trace, uring, vbdev_delay, vbdev_gpt, vbdev_lvol, vbdev_opal, 00:09:56.304 vbdev_passthru, vbdev_split, vbdev_zone_block, vfio_pci, vfio_user, vfu, 00:09:56.304 vfu_virtio, vfu_virtio_blk, vfu_virtio_fs, vfu_virtio_fs_data, 00:09:56.304 vfu_virtio_io, vfu_virtio_scsi, vfu_virtio_scsi_data, virtio, 00:09:56.304 virtio_blk, virtio_dev, virtio_pci, virtio_user, virtio_vfio_user, vmd) 00:09:56.304 --silence-noticelog disable notice level logging to stderr 00:09:56.304 00:09:56.304 Trace options: 00:09:56.304 --num-trace-entries number of trace entries for each core, must be power of 2, 00:09:56.304 /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd: unrecognized option '--ii=' 00:09:56.304 [2024-11-17 01:30:04.559710] spdk_dd.c:1480:main: *ERROR*: Invalid arguments 00:09:56.304 setting 0 to disable trace (default 32768) 00:09:56.304 Tracepoints vary in size and can use more than one trace entry. 00:09:56.304 -e, --tpoint-group [:] 00:09:56.304 group_name - tracepoint group name for spdk trace buffers (scsi, bdev, 00:09:56.304 ftl, blobfs, dsa, thread, nvme_pcie, iaa, nvme_tcp, bdev_nvme, sock, 00:09:56.304 blob, bdev_raid, scheduler, all). 00:09:56.304 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:09:56.304 a tracepoint group. First tpoint inside a group can be enabled by 00:09:56.304 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:09:56.304 combined (e.g. 
thread,bdev:0x1). All available tpoints can be found 00:09:56.304 in /include/spdk_internal/trace_defs.h 00:09:56.304 00:09:56.304 Other options: 00:09:56.304 -h, --help show this usage 00:09:56.304 -v, --version print SPDK version 00:09:56.304 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:09:56.304 --env-context Opaque context for use of the env implementation 00:09:56.304 00:09:56.304 Application specific: 00:09:56.304 [--------- DD Options ---------] 00:09:56.304 --if Input file. Must specify either --if or --ib. 00:09:56.304 --ib Input bdev. Must specifier either --if or --ib 00:09:56.304 --of Output file. Must specify either --of or --ob. 00:09:56.304 --ob Output bdev. Must specify either --of or --ob. 00:09:56.304 --iflag Input file flags. 00:09:56.304 --oflag Output file flags. 00:09:56.304 --bs I/O unit size (default: 4096) 00:09:56.304 --qd Queue depth (default: 2) 00:09:56.304 --count I/O unit count. The number of I/O units to copy. (default: all) 00:09:56.304 --skip Skip this many I/O units at start of input. (default: 0) 00:09:56.304 --seek Skip this many I/O units at start of output. (default: 0) 00:09:56.304 --aio Force usage of AIO. (by default io_uring is used if available) 00:09:56.304 --sparse Enable hole skipping in input target 00:09:56.304 Available iflag and oflag values: 00:09:56.304 append - append mode 00:09:56.304 direct - use direct I/O for data 00:09:56.304 directory - fail unless a directory 00:09:56.304 dsync - use synchronized I/O for data 00:09:56.304 noatime - do not update access time 00:09:56.304 noctty - do not assign controlling terminal from file 00:09:56.304 nofollow - do not follow symlinks 00:09:56.304 nonblock - use non-blocking I/O 00:09:56.304 sync - use synchronized I/O for data and metadata 00:09:56.304 ************************************ 00:09:56.304 END TEST dd_invalid_arguments 00:09:56.304 ************************************ 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@655 -- # es=2 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:56.304 00:09:56.304 real 0m0.141s 00:09:56.304 user 0m0.077s 00:09:56.304 sys 0m0.063s 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_invalid_arguments -- common/autotest_common.sh@10 -- # set +x 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@217 -- # run_test dd_double_input double_input 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.304 ************************************ 00:09:56.304 START TEST dd_double_input 00:09:56.304 ************************************ 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1129 -- # double_input 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- dd/negative_dd.sh@19 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@652 -- # local es=0 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.304 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ib= --ob= 00:09:56.563 [2024-11-17 01:30:04.773816] spdk_dd.c:1487:main: *ERROR*: You may specify either --if or --ib, but not both. 
00:09:56.563 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@655 -- # es=22 00:09:56.563 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:56.564 00:09:56.564 real 0m0.170s 00:09:56.564 user 0m0.096s 00:09:56.564 sys 0m0.072s 00:09:56.564 ************************************ 00:09:56.564 END TEST dd_double_input 00:09:56.564 ************************************ 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_input -- common/autotest_common.sh@10 -- # set +x 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@218 -- # run_test dd_double_output double_output 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.564 ************************************ 00:09:56.564 START TEST dd_double_output 00:09:56.564 ************************************ 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1129 -- # double_output 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- dd/negative_dd.sh@27 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@652 -- # local es=0 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.564 01:30:04 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --ob= 00:09:56.564 [2024-11-17 01:30:04.975952] spdk_dd.c:1493:main: *ERROR*: You may specify either --of or --ob, but not both. 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@655 -- # es=22 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:56.824 00:09:56.824 real 0m0.147s 00:09:56.824 user 0m0.078s 00:09:56.824 sys 0m0.066s 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_double_output -- common/autotest_common.sh@10 -- # set +x 00:09:56.824 ************************************ 00:09:56.824 END TEST dd_double_output 00:09:56.824 ************************************ 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@219 -- # run_test dd_no_input no_input 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:56.824 ************************************ 00:09:56.824 START TEST dd_no_input 00:09:56.824 ************************************ 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1129 -- # no_input 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- dd/negative_dd.sh@35 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@652 -- # local es=0 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ob= 00:09:56.824 [2024-11-17 01:30:05.173532] spdk_dd.c:1499:main: 
*ERROR*: You must specify either --if or --ib 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@655 -- # es=22 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:56.824 00:09:56.824 real 0m0.139s 00:09:56.824 user 0m0.077s 00:09:56.824 sys 0m0.061s 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:56.824 ************************************ 00:09:56.824 END TEST dd_no_input 00:09:56.824 ************************************ 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative.dd_no_input -- common/autotest_common.sh@10 -- # set +x 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@220 -- # run_test dd_no_output no_output 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.824 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:57.084 ************************************ 00:09:57.084 START TEST dd_no_output 00:09:57.084 ************************************ 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1129 -- # no_output 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- dd/negative_dd.sh@41 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@652 -- # local es=0 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 00:09:57.084 [2024-11-17 01:30:05.370441] spdk_dd.c:1505:main: *ERROR*: You must specify either --of or --ob 00:09:57.084 01:30:05 
spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@655 -- # es=22 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.084 ************************************ 00:09:57.084 END TEST dd_no_output 00:09:57.084 ************************************ 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.084 00:09:57.084 real 0m0.141s 00:09:57.084 user 0m0.072s 00:09:57.084 sys 0m0.066s 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_no_output -- common/autotest_common.sh@10 -- # set +x 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@221 -- # run_test dd_wrong_blocksize wrong_blocksize 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:57.084 ************************************ 00:09:57.084 START TEST dd_wrong_blocksize 00:09:57.084 ************************************ 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1129 -- # wrong_blocksize 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- dd/negative_dd.sh@47 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:57.084 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=0 00:09:57.344 [2024-11-17 01:30:05.596888] spdk_dd.c:1511:main: *ERROR*: Invalid --bs value 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@655 -- # es=22 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:57.344 00:09:57.344 real 0m0.173s 00:09:57.344 user 0m0.096s 00:09:57.344 sys 0m0.074s 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:57.344 ************************************ 00:09:57.344 END TEST dd_wrong_blocksize 00:09:57.344 ************************************ 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_wrong_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@222 -- # run_test dd_smaller_blocksize smaller_blocksize 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:57.344 ************************************ 00:09:57.344 START TEST dd_smaller_blocksize 00:09:57.344 ************************************ 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1129 -- # smaller_blocksize 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- dd/negative_dd.sh@55 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@652 -- # local es=0 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:57.344 
01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:57.344 01:30:05 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --bs=99999999999999 00:09:57.603 [2024-11-17 01:30:05.818157] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:09:57.603 [2024-11-17 01:30:05.818328] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63910 ] 00:09:57.603 [2024-11-17 01:30:05.999684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:57.862 [2024-11-17 01:30:06.117789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:57.862 [2024-11-17 01:30:06.296287] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:09:58.431 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:58.689 EAL: eal_memalloc_alloc_seg_bulk(): couldn't find suitable memseg_list 00:09:58.689 [2024-11-17 01:30:07.042495] spdk_dd.c:1184:dd_run: *ERROR*: Cannot allocate memory - try smaller block size value 00:09:58.689 [2024-11-17 01:30:07.042595] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:59.624 [2024-11-17 01:30:07.747519] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@655 -- # es=244 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@664 -- # es=116 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@665 -- # case "$es" in 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@672 -- # es=1 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:59.624 ************************************ 00:09:59.624 END TEST dd_smaller_blocksize 00:09:59.624 ************************************ 00:09:59.624 00:09:59.624 real 0m2.303s 00:09:59.624 user 0m1.520s 00:09:59.624 sys 0m0.667s 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative.dd_smaller_blocksize -- common/autotest_common.sh@10 -- # set +x 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@223 -- # run_test dd_invalid_count invalid_count 00:09:59.624 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:59.625 ************************************ 00:09:59.625 START TEST dd_invalid_count 00:09:59.625 ************************************ 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1129 -- # invalid_count 
00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- dd/negative_dd.sh@63 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@652 -- # local es=0 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:59.625 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --count=-9 00:09:59.884 [2024-11-17 01:30:08.173693] spdk_dd.c:1517:main: *ERROR*: Invalid --count value 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@655 -- # es=22 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:59.884 00:09:59.884 real 0m0.165s 00:09:59.884 user 0m0.090s 00:09:59.884 sys 0m0.072s 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_count -- common/autotest_common.sh@10 -- # set +x 00:09:59.884 ************************************ 00:09:59.884 END TEST dd_invalid_count 00:09:59.884 ************************************ 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@224 -- # run_test dd_invalid_oflag invalid_oflag 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:09:59.884 ************************************ 
00:09:59.884 START TEST dd_invalid_oflag 00:09:59.884 ************************************ 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1129 -- # invalid_oflag 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- dd/negative_dd.sh@71 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@652 -- # local es=0 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:09:59.884 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --oflag=0 00:10:00.143 [2024-11-17 01:30:08.387274] spdk_dd.c:1523:main: *ERROR*: --oflags may be used only with --of 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@655 -- # es=22 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.143 ************************************ 00:10:00.143 END TEST dd_invalid_oflag 00:10:00.143 ************************************ 00:10:00.143 00:10:00.143 real 0m0.170s 00:10:00.143 user 0m0.098s 00:10:00.143 sys 0m0.069s 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_oflag -- common/autotest_common.sh@10 -- # set +x 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@225 -- # run_test dd_invalid_iflag invalid_iflag 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:00.143 ************************************ 00:10:00.143 START TEST dd_invalid_iflag 00:10:00.143 
************************************ 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1129 -- # invalid_iflag 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- dd/negative_dd.sh@79 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@652 -- # local es=0 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:00.143 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib= --ob= --iflag=0 00:10:00.402 [2024-11-17 01:30:08.606592] spdk_dd.c:1529:main: *ERROR*: --iflags may be used only with --if 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@655 -- # es=22 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.402 ************************************ 00:10:00.402 END TEST dd_invalid_iflag 00:10:00.402 ************************************ 00:10:00.402 00:10:00.402 real 0m0.172s 00:10:00.402 user 0m0.089s 00:10:00.402 sys 0m0.079s 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_invalid_iflag -- common/autotest_common.sh@10 -- # set +x 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@226 -- # run_test dd_unknown_flag unknown_flag 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:00.402 ************************************ 00:10:00.402 START TEST dd_unknown_flag 00:10:00.402 ************************************ 00:10:00.402 
01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1129 -- # unknown_flag 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- dd/negative_dd.sh@87 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@652 -- # local es=0 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:00.402 01:30:08 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --oflag=-1 00:10:00.402 [2024-11-17 01:30:08.842020] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:00.402 [2024-11-17 01:30:08.842279] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64023 ] 00:10:00.662 [2024-11-17 01:30:09.036621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.921 [2024-11-17 01:30:09.161358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.921 [2024-11-17 01:30:09.357594] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:01.179 [2024-11-17 01:30:09.455724] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:01.179 [2024-11-17 01:30:09.455809] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:01.179 [2024-11-17 01:30:09.455887] spdk_dd.c: 986:parse_flags: *ERROR*: Unknown file flag: -1 00:10:01.179 [2024-11-17 01:30:09.455912] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:01.179 [2024-11-17 01:30:09.456167] spdk_dd.c:1218:dd_run: *ERROR*: Failed to register files with io_uring: -9 (Bad file descriptor) 00:10:01.179 [2024-11-17 01:30:09.456208] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:01.179 [2024-11-17 01:30:09.456273] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:01.179 [2024-11-17 01:30:09.456292] app.c:1049:app_stop: *NOTICE*: spdk_app_stop called twice 00:10:01.747 [2024-11-17 01:30:10.157613] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:02.005 01:30:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@655 -- # es=234 00:10:02.005 01:30:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.005 01:30:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@664 -- # es=106 00:10:02.005 01:30:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@665 -- # case "$es" in 00:10:02.005 01:30:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@672 -- # es=1 00:10:02.005 01:30:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.005 00:10:02.005 real 0m1.710s 00:10:02.005 user 0m1.386s 00:10:02.005 sys 0m0.215s 00:10:02.005 01:30:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.005 01:30:10 spdk_dd.spdk_dd_negative.dd_unknown_flag -- common/autotest_common.sh@10 -- # set +x 00:10:02.005 ************************************ 00:10:02.005 END TEST dd_unknown_flag 00:10:02.005 ************************************ 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@227 -- # run_test dd_invalid_json invalid_json 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:02.264 ************************************ 00:10:02.264 START TEST dd_invalid_json 00:10:02.264 ************************************ 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1129 -- # invalid_json 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@652 -- # local es=0 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- dd/negative_dd.sh@94 -- # : 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:02.264 01:30:10 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --of=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump1 --json /dev/fd/62 00:10:02.264 [2024-11-17 01:30:10.596377] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:02.264 [2024-11-17 01:30:10.596522] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64069 ] 00:10:02.523 [2024-11-17 01:30:10.770324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:02.523 [2024-11-17 01:30:10.872879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:02.523 [2024-11-17 01:30:10.872978] json_config.c: 535:parse_json: *ERROR*: JSON data cannot be empty 00:10:02.523 [2024-11-17 01:30:10.873003] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:10:02.523 [2024-11-17 01:30:10.873019] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:02.523 [2024-11-17 01:30:10.873099] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@655 -- # es=234 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@664 -- # es=106 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@665 -- # case "$es" in 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@672 -- # es=1 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:02.782 00:10:02.782 real 0m0.645s 00:10:02.782 user 0m0.411s 00:10:02.782 sys 0m0.128s 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_json -- common/autotest_common.sh@10 -- # set +x 00:10:02.782 ************************************ 00:10:02.782 END TEST dd_invalid_json 00:10:02.782 ************************************ 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@228 -- # run_test dd_invalid_seek invalid_seek 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:02.782 01:30:11 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:02.782 ************************************ 00:10:02.782 START TEST dd_invalid_seek 00:10:02.782 ************************************ 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1129 -- # invalid_seek 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@102 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@103 -- # local -A method_bdev_malloc_create_0 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@108 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:02.783 
01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@109 -- # local -A method_bdev_malloc_create_1 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@652 -- # local es=0 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/negative_dd.sh@115 -- # gen_conf 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- dd/common.sh@31 -- # xtrace_disable 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:02.783 01:30:11 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --seek=513 --json /dev/fd/62 --bs=512 00:10:03.042 { 00:10:03.042 "subsystems": [ 00:10:03.042 { 00:10:03.042 "subsystem": "bdev", 00:10:03.042 "config": [ 00:10:03.042 { 00:10:03.042 "params": { 00:10:03.042 "block_size": 512, 00:10:03.042 "num_blocks": 512, 00:10:03.042 "name": "malloc0" 00:10:03.042 }, 00:10:03.042 "method": "bdev_malloc_create" 00:10:03.042 }, 00:10:03.042 { 00:10:03.042 "params": { 00:10:03.042 "block_size": 512, 00:10:03.042 "num_blocks": 512, 00:10:03.042 "name": "malloc1" 00:10:03.042 }, 00:10:03.042 "method": "bdev_malloc_create" 00:10:03.042 }, 00:10:03.042 { 00:10:03.042 "method": "bdev_wait_for_examine" 00:10:03.042 } 00:10:03.042 ] 00:10:03.042 } 00:10:03.042 ] 00:10:03.042 } 00:10:03.042 [2024-11-17 01:30:11.289465] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:03.042 [2024-11-17 01:30:11.289640] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64105 ] 00:10:03.042 [2024-11-17 01:30:11.473872] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.314 [2024-11-17 01:30:11.598516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:03.606 [2024-11-17 01:30:11.808122] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:03.606 [2024-11-17 01:30:11.934766] spdk_dd.c:1145:dd_run: *ERROR*: --seek value too big (513) - only 512 blocks available in output 00:10:03.606 [2024-11-17 01:30:11.934864] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:04.551 [2024-11-17 01:30:12.659192] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@655 -- # es=228 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@664 -- # es=100 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@665 -- # case "$es" in 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@672 -- # es=1 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:04.551 00:10:04.551 real 0m1.746s 00:10:04.551 user 0m1.496s 00:10:04.551 sys 0m0.204s 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:04.551 ************************************ 00:10:04.551 END TEST dd_invalid_seek 00:10:04.551 ************************************ 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_seek -- common/autotest_common.sh@10 -- # set +x 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@229 -- # run_test dd_invalid_skip invalid_skip 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:04.551 ************************************ 00:10:04.551 START TEST dd_invalid_skip 00:10:04.551 ************************************ 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1129 -- # invalid_skip 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@125 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:04.551 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@126 -- # local -A method_bdev_malloc_create_0 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@131 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' 
['block_size']='512') 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@132 -- # local -A method_bdev_malloc_create_1 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@652 -- # local es=0 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/negative_dd.sh@138 -- # gen_conf 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- dd/common.sh@31 -- # xtrace_disable 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:04.552 01:30:12 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --skip=513 --json /dev/fd/62 --bs=512 00:10:04.810 { 00:10:04.810 "subsystems": [ 00:10:04.810 { 00:10:04.810 "subsystem": "bdev", 00:10:04.810 "config": [ 00:10:04.810 { 00:10:04.810 "params": { 00:10:04.810 "block_size": 512, 00:10:04.810 "num_blocks": 512, 00:10:04.810 "name": "malloc0" 00:10:04.810 }, 00:10:04.810 "method": "bdev_malloc_create" 00:10:04.810 }, 00:10:04.810 { 00:10:04.810 "params": { 00:10:04.810 "block_size": 512, 00:10:04.810 "num_blocks": 512, 00:10:04.810 "name": "malloc1" 00:10:04.810 }, 00:10:04.810 "method": "bdev_malloc_create" 00:10:04.810 }, 00:10:04.810 { 00:10:04.810 "method": "bdev_wait_for_examine" 00:10:04.810 } 00:10:04.810 ] 00:10:04.810 } 00:10:04.810 ] 00:10:04.810 } 00:10:04.810 [2024-11-17 01:30:13.088284] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:04.811 [2024-11-17 01:30:13.088446] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64145 ] 00:10:05.070 [2024-11-17 01:30:13.271084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.070 [2024-11-17 01:30:13.373640] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.329 [2024-11-17 01:30:13.554001] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:05.329 [2024-11-17 01:30:13.679898] spdk_dd.c:1102:dd_run: *ERROR*: --skip value too big (513) - only 512 blocks available in input 00:10:05.329 [2024-11-17 01:30:13.679996] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:06.268 [2024-11-17 01:30:14.403646] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@655 -- # es=228 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@664 -- # es=100 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@665 -- # case "$es" in 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@672 -- # es=1 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:06.268 00:10:06.268 real 0m1.687s 00:10:06.268 user 0m1.427s 00:10:06.268 sys 0m0.205s 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_skip -- common/autotest_common.sh@10 -- # set +x 00:10:06.268 ************************************ 00:10:06.268 END TEST dd_invalid_skip 00:10:06.268 ************************************ 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@230 -- # run_test dd_invalid_input_count invalid_input_count 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:06.268 ************************************ 00:10:06.268 START TEST dd_invalid_input_count 00:10:06.268 ************************************ 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1129 -- # invalid_input_count 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@149 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@150 -- # local -A method_bdev_malloc_create_0 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@155 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # 
method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@156 -- # local -A method_bdev_malloc_create_1 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@652 -- # local es=0 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/negative_dd.sh@162 -- # gen_conf 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- dd/common.sh@31 -- # xtrace_disable 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:06.268 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:06.269 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:06.269 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:06.269 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:06.269 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:06.269 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:06.269 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:06.269 01:30:14 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --count=513 --json /dev/fd/62 --bs=512 00:10:06.528 { 00:10:06.528 "subsystems": [ 00:10:06.528 { 00:10:06.528 "subsystem": "bdev", 00:10:06.528 "config": [ 00:10:06.528 { 00:10:06.528 "params": { 00:10:06.528 "block_size": 512, 00:10:06.528 "num_blocks": 512, 00:10:06.528 "name": "malloc0" 00:10:06.528 }, 00:10:06.528 "method": "bdev_malloc_create" 00:10:06.528 }, 00:10:06.528 { 00:10:06.528 "params": { 00:10:06.528 "block_size": 512, 00:10:06.528 "num_blocks": 512, 00:10:06.528 "name": "malloc1" 00:10:06.528 }, 00:10:06.528 "method": "bdev_malloc_create" 00:10:06.528 }, 00:10:06.528 { 00:10:06.528 "method": "bdev_wait_for_examine" 00:10:06.528 } 00:10:06.528 ] 00:10:06.528 } 00:10:06.528 ] 00:10:06.528 } 00:10:06.528 [2024-11-17 01:30:14.828364] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:06.528 [2024-11-17 01:30:14.828537] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64196 ] 00:10:06.788 [2024-11-17 01:30:15.014907] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:06.788 [2024-11-17 01:30:15.140053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:07.047 [2024-11-17 01:30:15.378210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:07.306 [2024-11-17 01:30:15.524757] spdk_dd.c:1110:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available from input 00:10:07.306 [2024-11-17 01:30:15.524861] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:07.875 [2024-11-17 01:30:16.168324] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@655 -- # es=228 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@664 -- # es=100 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@665 -- # case "$es" in 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@672 -- # es=1 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:08.134 00:10:08.134 real 0m1.669s 00:10:08.134 user 0m1.404s 00:10:08.134 sys 0m0.217s 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:08.134 ************************************ 00:10:08.134 END TEST dd_invalid_input_count 00:10:08.134 ************************************ 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_input_count -- common/autotest_common.sh@10 -- # set +x 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@231 -- # run_test dd_invalid_output_count invalid_output_count 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:08.134 ************************************ 00:10:08.134 START TEST dd_invalid_output_count 00:10:08.134 ************************************ 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1129 -- # invalid_output_count 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@173 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@174 -- # local -A method_bdev_malloc_create_0 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 
--if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@652 -- # local es=0 00:10:08.134 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/negative_dd.sh@180 -- # gen_conf 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- dd/common.sh@31 -- # xtrace_disable 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:08.135 01:30:16 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/dd/dd.dump0 --ob=malloc0 --count=513 --json /dev/fd/62 --bs=512 00:10:08.135 { 00:10:08.135 "subsystems": [ 00:10:08.135 { 00:10:08.135 "subsystem": "bdev", 00:10:08.135 "config": [ 00:10:08.135 { 00:10:08.135 "params": { 00:10:08.135 "block_size": 512, 00:10:08.135 "num_blocks": 512, 00:10:08.135 "name": "malloc0" 00:10:08.135 }, 00:10:08.135 "method": "bdev_malloc_create" 00:10:08.135 }, 00:10:08.135 { 00:10:08.135 "method": "bdev_wait_for_examine" 00:10:08.135 } 00:10:08.135 ] 00:10:08.135 } 00:10:08.135 ] 00:10:08.135 } 00:10:08.135 [2024-11-17 01:30:16.518750] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:08.135 [2024-11-17 01:30:16.518896] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64246 ] 00:10:08.394 [2024-11-17 01:30:16.684715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:08.394 [2024-11-17 01:30:16.777000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.653 [2024-11-17 01:30:16.926195] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:08.654 [2024-11-17 01:30:17.022949] spdk_dd.c:1152:dd_run: *ERROR*: --count value too big (513) - only 512 blocks available in output 00:10:08.654 [2024-11-17 01:30:17.023035] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:09.222 [2024-11-17 01:30:17.618546] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@655 -- # es=228 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@664 -- # es=100 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@665 -- # case "$es" in 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@672 -- # es=1 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:09.481 00:10:09.481 real 0m1.429s 00:10:09.481 user 0m1.197s 00:10:09.481 sys 0m0.174s 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:09.481 ************************************ 00:10:09.481 END TEST dd_invalid_output_count 00:10:09.481 ************************************ 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_invalid_output_count -- common/autotest_common.sh@10 -- # set +x 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative -- dd/negative_dd.sh@232 -- # run_test dd_bs_not_multiple bs_not_multiple 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:09.481 ************************************ 00:10:09.481 START TEST dd_bs_not_multiple 00:10:09.481 ************************************ 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1129 -- # bs_not_multiple 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@190 -- # local mbdev0=malloc0 mbdev0_b=512 mbdev0_bs=512 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # method_bdev_malloc_create_0=(['name']='malloc0' ['num_blocks']='512' ['block_size']='512') 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@191 -- # local -A method_bdev_malloc_create_0 00:10:09.481 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@196 -- # local mbdev1=malloc1 mbdev1_b=512 mbdev1_bs=512 00:10:09.482 01:30:17 
spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # method_bdev_malloc_create_1=(['name']='malloc1' ['num_blocks']='512' ['block_size']='512') 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@197 -- # local -A method_bdev_malloc_create_1 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@652 -- # local es=0 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/negative_dd.sh@203 -- # gen_conf 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- dd/common.sh@31 -- # xtrace_disable 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd ]] 00:10:09.482 01:30:17 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=malloc0 --ob=malloc1 --bs=513 --json /dev/fd/62 00:10:09.741 { 00:10:09.741 "subsystems": [ 00:10:09.741 { 00:10:09.741 "subsystem": "bdev", 00:10:09.741 "config": [ 00:10:09.741 { 00:10:09.741 "params": { 00:10:09.741 "block_size": 512, 00:10:09.741 "num_blocks": 512, 00:10:09.741 "name": "malloc0" 00:10:09.741 }, 00:10:09.741 "method": "bdev_malloc_create" 00:10:09.741 }, 00:10:09.741 { 00:10:09.741 "params": { 00:10:09.741 "block_size": 512, 00:10:09.741 "num_blocks": 512, 00:10:09.741 "name": "malloc1" 00:10:09.741 }, 00:10:09.741 "method": "bdev_malloc_create" 00:10:09.741 }, 00:10:09.741 { 00:10:09.741 "method": "bdev_wait_for_examine" 00:10:09.741 } 00:10:09.741 ] 00:10:09.741 } 00:10:09.741 ] 00:10:09.741 } 00:10:09.741 [2024-11-17 01:30:18.031541] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
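This is the bs-not-a-multiple negative case: two 512-block malloc bdevs are defined and the copy is requested with --bs=513, which spdk_dd rejects because 513 is not a multiple of the input bdev's 512-byte native block size (see the error a little further down). A hedged sketch of the same invocation, with the JSON config written out explicitly instead of being generated by gen_conf:

    # Illustrative reproduction of the dd_bs_not_multiple case, assuming the build paths from the log.
    SPDK_DD=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd
    CONF='{"subsystems":[{"subsystem":"bdev","config":[
      {"params":{"block_size":512,"num_blocks":512,"name":"malloc0"},"method":"bdev_malloc_create"},
      {"params":{"block_size":512,"num_blocks":512,"name":"malloc1"},"method":"bdev_malloc_create"},
      {"method":"bdev_wait_for_examine"}]}]}'

    # 513 bytes is not a multiple of malloc0's 512-byte block size, so the copy is refused.
    "$SPDK_DD" --ib=malloc0 --ob=malloc1 --bs=513 --json <(printf '%s' "$CONF")

The es bookkeeping that follows each of these NOT runs (es=234, then (( es > 128 )) reducing it to 106, then the case statement forcing es=1) appears to strip the 128 signal offset and collapse any remaining non-zero status to 1, so the assertion only checks that the command failed, not how it failed.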
00:10:09.741 [2024-11-17 01:30:18.031710] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64285 ] 00:10:10.000 [2024-11-17 01:30:18.209953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.000 [2024-11-17 01:30:18.290119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:10.000 [2024-11-17 01:30:18.435186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:10.259 [2024-11-17 01:30:18.542557] spdk_dd.c:1168:dd_run: *ERROR*: --bs value must be a multiple of input native block size (512) 00:10:10.259 [2024-11-17 01:30:18.542632] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:10.828 [2024-11-17 01:30:19.145846] spdk_dd.c:1536:main: *ERROR*: Error occurred while performing copy 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@655 -- # es=234 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@664 -- # es=106 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@665 -- # case "$es" in 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@672 -- # es=1 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:11.087 00:10:11.087 real 0m1.472s 00:10:11.087 user 0m1.224s 00:10:11.087 sys 0m0.202s 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative.dd_bs_not_multiple -- common/autotest_common.sh@10 -- # set +x 00:10:11.087 ************************************ 00:10:11.087 END TEST dd_bs_not_multiple 00:10:11.087 ************************************ 00:10:11.087 00:10:11.087 real 0m15.170s 00:10:11.087 user 0m11.245s 00:10:11.087 sys 0m3.229s 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.087 01:30:19 spdk_dd.spdk_dd_negative -- common/autotest_common.sh@10 -- # set +x 00:10:11.087 ************************************ 00:10:11.087 END TEST spdk_dd_negative 00:10:11.087 ************************************ 00:10:11.087 ************************************ 00:10:11.087 END TEST spdk_dd 00:10:11.087 ************************************ 00:10:11.087 00:10:11.087 real 2m45.692s 00:10:11.087 user 2m12.904s 00:10:11.087 sys 1m0.835s 00:10:11.087 01:30:19 spdk_dd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.087 01:30:19 spdk_dd -- common/autotest_common.sh@10 -- # set +x 00:10:11.088 01:30:19 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:11.088 01:30:19 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:10:11.088 01:30:19 -- spdk/autotest.sh@260 -- # timing_exit lib 00:10:11.088 01:30:19 -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:11.088 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:10:11.346 01:30:19 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:10:11.346 01:30:19 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:10:11.346 01:30:19 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:10:11.346 01:30:19 -- spdk/autotest.sh@277 
-- # export NET_TYPE 00:10:11.346 01:30:19 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:10:11.346 01:30:19 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:10:11.346 01:30:19 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:11.346 01:30:19 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.346 01:30:19 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.346 01:30:19 -- common/autotest_common.sh@10 -- # set +x 00:10:11.346 ************************************ 00:10:11.346 START TEST nvmf_tcp 00:10:11.346 ************************************ 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:11.346 * Looking for test storage... 00:10:11.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.346 01:30:19 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.346 --rc genhtml_branch_coverage=1 00:10:11.346 --rc genhtml_function_coverage=1 00:10:11.346 --rc genhtml_legend=1 00:10:11.346 --rc geninfo_all_blocks=1 00:10:11.346 --rc geninfo_unexecuted_blocks=1 00:10:11.346 00:10:11.346 ' 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.346 --rc genhtml_branch_coverage=1 00:10:11.346 --rc genhtml_function_coverage=1 00:10:11.346 --rc genhtml_legend=1 00:10:11.346 --rc geninfo_all_blocks=1 00:10:11.346 --rc geninfo_unexecuted_blocks=1 00:10:11.346 00:10:11.346 ' 00:10:11.346 01:30:19 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.346 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.346 --rc genhtml_branch_coverage=1 00:10:11.346 --rc genhtml_function_coverage=1 00:10:11.347 --rc genhtml_legend=1 00:10:11.347 --rc geninfo_all_blocks=1 00:10:11.347 --rc geninfo_unexecuted_blocks=1 00:10:11.347 00:10:11.347 ' 00:10:11.347 01:30:19 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.347 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.347 --rc genhtml_branch_coverage=1 00:10:11.347 --rc genhtml_function_coverage=1 00:10:11.347 --rc genhtml_legend=1 00:10:11.347 --rc geninfo_all_blocks=1 00:10:11.347 --rc geninfo_unexecuted_blocks=1 00:10:11.347 00:10:11.347 ' 00:10:11.347 01:30:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:10:11.347 01:30:19 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:11.347 01:30:19 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:11.347 01:30:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.347 01:30:19 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.347 01:30:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:11.347 ************************************ 00:10:11.347 START TEST nvmf_target_core 00:10:11.347 ************************************ 00:10:11.347 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:10:11.605 * Looking for test storage... 00:10:11.605 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.605 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.606 --rc genhtml_branch_coverage=1 00:10:11.606 --rc genhtml_function_coverage=1 00:10:11.606 --rc genhtml_legend=1 00:10:11.606 --rc geninfo_all_blocks=1 00:10:11.606 --rc geninfo_unexecuted_blocks=1 00:10:11.606 00:10:11.606 ' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.606 --rc genhtml_branch_coverage=1 00:10:11.606 --rc genhtml_function_coverage=1 00:10:11.606 --rc genhtml_legend=1 00:10:11.606 --rc geninfo_all_blocks=1 00:10:11.606 --rc geninfo_unexecuted_blocks=1 00:10:11.606 00:10:11.606 ' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.606 --rc genhtml_branch_coverage=1 00:10:11.606 --rc genhtml_function_coverage=1 00:10:11.606 --rc genhtml_legend=1 00:10:11.606 --rc geninfo_all_blocks=1 00:10:11.606 --rc geninfo_unexecuted_blocks=1 00:10:11.606 00:10:11.606 ' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.606 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.606 --rc genhtml_branch_coverage=1 00:10:11.606 --rc genhtml_function_coverage=1 00:10:11.606 --rc genhtml_legend=1 00:10:11.606 --rc geninfo_all_blocks=1 00:10:11.606 --rc geninfo_unexecuted_blocks=1 00:10:11.606 00:10:11.606 ' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.606 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 1 -eq 0 ]] 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.606 ************************************ 00:10:11.606 START TEST nvmf_host_management 00:10:11.606 ************************************ 00:10:11.606 01:30:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:11.606 * Looking for test storage... 
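The "line 33: [: : integer expression expected" message printed each time test/nvmf/common.sh is sourced comes from a numeric test receiving an empty string: the trace shows '[' '' -eq 1 ']', and the [ builtin requires both operands of -eq to be integers. The sketch below reproduces the message and shows one guarded form; SOME_FLAG is a hypothetical name, since the trace only shows the variable's empty expansion, not its name:

    # Reproduces the error captured in the log:
    SOME_FLAG=""
    [ "$SOME_FLAG" -eq 1 ]            # -> "[: : integer expression expected", exit status 2

    # A guarded variant that treats "unset or empty" as 0 instead of erroring:
    if [ "${SOME_FLAG:-0}" -eq 1 ]; then
        echo "flag enabled"
    fi

The message is harmless here (the failed test simply behaves as false, so the branch is skipped), but the expansion-with-default form would avoid the spurious stderr line.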
00:10:11.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:11.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.867 --rc genhtml_branch_coverage=1 00:10:11.867 --rc genhtml_function_coverage=1 00:10:11.867 --rc genhtml_legend=1 00:10:11.867 --rc geninfo_all_blocks=1 00:10:11.867 --rc geninfo_unexecuted_blocks=1 00:10:11.867 00:10:11.867 ' 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:11.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.867 --rc genhtml_branch_coverage=1 00:10:11.867 --rc genhtml_function_coverage=1 00:10:11.867 --rc genhtml_legend=1 00:10:11.867 --rc geninfo_all_blocks=1 00:10:11.867 --rc geninfo_unexecuted_blocks=1 00:10:11.867 00:10:11.867 ' 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:11.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.867 --rc genhtml_branch_coverage=1 00:10:11.867 --rc genhtml_function_coverage=1 00:10:11.867 --rc genhtml_legend=1 00:10:11.867 --rc geninfo_all_blocks=1 00:10:11.867 --rc geninfo_unexecuted_blocks=1 00:10:11.867 00:10:11.867 ' 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:11.867 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:11.867 --rc genhtml_branch_coverage=1 00:10:11.867 --rc genhtml_function_coverage=1 00:10:11.867 --rc genhtml_legend=1 00:10:11.867 --rc geninfo_all_blocks=1 00:10:11.867 --rc geninfo_unexecuted_blocks=1 00:10:11.867 00:10:11.867 ' 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
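The long lt 1.15 2 trace that precedes each test group is scripts/common.sh checking the installed lcov version: the version strings are split on ".", "-" and ":" and compared field by field as integers, and because 1.15 is older than 2 the branch/function-coverage LCOV_OPTS are enabled. A condensed sketch of that comparison pattern, written as a standalone function (the real cmp_versions helper takes an operator argument; this illustrative version hard-codes less-than):

    # Field-wise dotted-version comparison, following the trace above.
    version_lt() {                 # version_lt 1.15 2  -> true (1.15 is older than 2)
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v a b
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            a=${ver1[v]:-0}; b=${ver2[v]:-0}
            (( a < b )) && return 0
            (( a > b )) && return 1
        done
        return 1                   # equal versions are not "less than"
    }

    version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "older lcov: enable branch/function coverage flags"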
00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.867 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:11.868 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:11.868 01:30:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:11.868 Cannot find device "nvmf_init_br" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:11.868 Cannot find device "nvmf_init_br2" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:11.868 Cannot find device "nvmf_tgt_br" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@164 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:11.868 Cannot find device "nvmf_tgt_br2" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@165 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:11.868 Cannot find device "nvmf_init_br" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@166 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:11.868 Cannot find device "nvmf_init_br2" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@167 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:11.868 Cannot find device "nvmf_tgt_br" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@168 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:11.868 Cannot find device "nvmf_tgt_br2" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:11.868 Cannot find device "nvmf_br" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@170 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:11.868 Cannot find device "nvmf_init_if" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:11.868 Cannot find device "nvmf_init_if2" 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:11.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@173 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:11.868 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@174 -- # true 00:10:11.868 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@207 -- # ip 
link add nvmf_br type bridge 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:12.127 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:12.387 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:12.387 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:10:12.387 00:10:12.387 --- 10.0.0.3 ping statistics --- 00:10:12.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.387 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:12.387 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:12.387 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:10:12.387 00:10:12.387 --- 10.0.0.4 ping statistics --- 00:10:12.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.387 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:12.387 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.387 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:10:12.387 00:10:12.387 --- 10.0.0.1 ping statistics --- 00:10:12.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.387 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:12.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:12.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:10:12.387 00:10:12.387 --- 10.0.0.2 ping statistics --- 00:10:12.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.387 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@461 -- # return 0 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=64634 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 64634 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64634 ']' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:12.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:12.387 01:30:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:12.387 [2024-11-17 01:30:20.794464] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
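Everything from "ip netns add" through the four pings above is nvmf_veth_init building the virtual test network: a namespace for the target, veth pairs whose bridge-side ends are enslaved to nvmf_br, host-side addresses 10.0.0.1 and 10.0.0.2, target-side addresses 10.0.0.3 and 10.0.0.4 inside the namespace, iptables ACCEPT rules for port 4420, and ping checks in both directions. A condensed sketch of one of the two pairs, using the same names and addresses the trace uses (the real helper also wraps iptables so each rule carries an SPDK_NVMF comment):

    # One initiator/target veth pair of the topology built above (the second pair is analogous).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3          # host -> target namespace across the bridge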
00:10:12.387 [2024-11-17 01:30:20.794602] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.646 [2024-11-17 01:30:20.975083] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:12.904 [2024-11-17 01:30:21.104106] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.904 [2024-11-17 01:30:21.104184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.904 [2024-11-17 01:30:21.104208] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.904 [2024-11-17 01:30:21.104223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.904 [2024-11-17 01:30:21.104239] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.904 [2024-11-17 01:30:21.106430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:12.904 [2024-11-17 01:30:21.106546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:12.904 [2024-11-17 01:30:21.106661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:12.904 [2024-11-17 01:30:21.106942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.904 [2024-11-17 01:30:21.324312] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:13.470 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:13.470 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:13.470 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:13.470 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.470 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.470 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.471 [2024-11-17 01:30:21.832155] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 
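With the reactors up, the script creates the TCP transport (rpc_cmd nvmf_create_transport with the previously assembled '-t tcp -o' options plus '-u 8192', acknowledged by the '*** TCP Transport Init ***' notice) and then replays a freshly generated rpcs.txt through rpc_cmd, which is the cat | rpc_cmd pair traced just below. The batch itself is never echoed into the log; a plausible reconstruction, consistent with the Malloc0 bdev, the nqn.2016-06.io.spdk:cnode0 subsystem, the 10.0.0.3:4420 listener and the host0 NQN that appear in later notices (the RPC names are real SPDK RPCs, the exact arguments are assumptions), expressed as individual calls rather than the batched file:

# Assumed equivalent of the rpcs.txt batch fed to the target:
rpc_cmd bdev_malloc_create 64 512 -b Malloc0
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

The add_host call is the piece the test later exercises: host_management.sh removes and re-adds nqn.2016-06.io.spdk:host0 while bdevperf is running, which is what produces the long run of ABORTED - SQ DELETION completions and the controller reset further down in this log.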
00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.471 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.729 Malloc0 00:10:13.729 [2024-11-17 01:30:21.950233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=64693 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 64693 /var/tmp/bdevperf.sock 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 64693 ']' 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:13.729 { 00:10:13.729 "params": { 00:10:13.729 "name": "Nvme$subsystem", 00:10:13.729 "trtype": "$TEST_TRANSPORT", 00:10:13.729 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:13.729 "adrfam": "ipv4", 00:10:13.729 "trsvcid": "$NVMF_PORT", 00:10:13.729 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:13.729 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:13.729 "hdgst": ${hdgst:-false}, 00:10:13.729 "ddgst": ${ddgst:-false} 00:10:13.729 }, 00:10:13.729 "method": "bdev_nvme_attach_controller" 00:10:13.729 } 00:10:13.729 EOF 00:10:13.729 )") 00:10:13.729 01:30:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:13.729 01:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:13.729 01:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:13.729 01:30:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:13.729 "params": { 00:10:13.729 "name": "Nvme0", 00:10:13.729 "trtype": "tcp", 00:10:13.729 "traddr": "10.0.0.3", 00:10:13.729 "adrfam": "ipv4", 00:10:13.729 "trsvcid": "4420", 00:10:13.729 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:13.729 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:13.729 "hdgst": false, 00:10:13.729 "ddgst": false 00:10:13.729 }, 00:10:13.729 "method": "bdev_nvme_attach_controller" 00:10:13.729 }' 00:10:13.729 [2024-11-17 01:30:22.126302] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:13.729 [2024-11-17 01:30:22.126705] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64693 ] 00:10:13.988 [2024-11-17 01:30:22.312675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:13.988 [2024-11-17 01:30:22.435099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:14.246 [2024-11-17 01:30:22.642790] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:14.504 Running I/O for 10 seconds... 
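bdevperf is the initiator side here: it runs as a second SPDK application on its own RPC socket (/var/tmp/bdevperf.sock) and gets its only bdev from the JSON shown above, a bdev_nvme_attach_controller call that dials the target's 10.0.0.3:4420 listener as host0. Reproducing the run by hand would look roughly like the sketch below; the config file name is an illustrative choice, and the wrapper object is the standard SPDK --json layout placed around the exact params printed in the log (queue depth 64, 64 KiB I/Os, verify workload, 10 seconds, matching -q 64 -o 65536 -w verify -t 10):

# Standard SPDK app JSON: the attach_controller entry from the log goes into
# the bdev subsystem's config array.
cat > /tmp/nvme0.json << 'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock \
    --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10

This first run is not expected to finish: the host is removed from the subsystem mid-workload, so the job ends early with an error (about 0.37 s in the table below) and the perf process is later killed, while the second, 1-second bdevperf run near the end of the test completes cleanly.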
00:10:14.763 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:14.763 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:10:14.763 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=323 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 323 -ge 100 ']' 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.764 01:30:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.764 01:30:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:14.764 [2024-11-17 01:30:23.195499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:57344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:57472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:57600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:57728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:57856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:57984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:58112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:58240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:58368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:58496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:58624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:58752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:58880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.195971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.195994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:59008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.196009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.196029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:59136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.196043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.196068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:59264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.196083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.196101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:59392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.196116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.196135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:59520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.196150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.196168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.196182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.196201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:59776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.196215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.764 [2024-11-17 01:30:23.196236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.764 [2024-11-17 01:30:23.196251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:60160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:60288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:60416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:60544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:60672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:60800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:10:14.765 [2024-11-17 01:30:23.196472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:60928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 
[2024-11-17 01:30:23.196764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:62208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:62592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:62848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.196973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.196987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.197002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:63104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.197015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.197030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.197043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.197069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:63360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.197083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 
01:30:23.197099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.197112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.197129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.197143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.197158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.765 [2024-11-17 01:30:23.197172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.765 [2024-11-17 01:30:23.197187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:64000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:64512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:64640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 
01:30:23.197385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:64768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:65024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:65152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:65280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:14.766 [2024-11-17 01:30:23.197543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.197559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:10:14.766 [2024-11-17 01:30:23.197972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.766 [2024-11-17 01:30:23.198005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.198023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.766 [2024-11-17 01:30:23.198036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.198051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.766 [2024-11-17 01:30:23.198064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.198078] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:14.766 [2024-11-17 01:30:23.198091] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:14.766 [2024-11-17 01:30:23.198103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:10:14.766 task offset: 57344 on job bdev=Nvme0n1 fails 00:10:14.766 00:10:14.766 Latency(us) 00:10:14.766 [2024-11-17T01:30:23.225Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.766 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:14.766 Job: Nvme0n1 ended in about 0.37 seconds with error 00:10:14.766 Verification LBA range: start 0x0 length 0x400 00:10:14.766 Nvme0n1 : 0.37 1220.51 76.28 174.36 0.00 44191.83 3053.38 42419.67 00:10:14.766 [2024-11-17T01:30:23.225Z] =================================================================================================================== 00:10:14.766 [2024-11-17T01:30:23.225Z] Total : 1220.51 76.28 174.36 0.00 44191.83 3053.38 42419.67 00:10:14.766 [2024-11-17 01:30:23.199393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:14.766 [2024-11-17 01:30:23.204600] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:14.766 [2024-11-17 01:30:23.204768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:10:14.766 [2024-11-17 01:30:23.214257] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 64693 00:10:16.142 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (64693) - No such process 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:16.142 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:16.142 { 00:10:16.142 "params": { 00:10:16.142 "name": "Nvme$subsystem", 00:10:16.142 "trtype": "$TEST_TRANSPORT", 00:10:16.142 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:16.142 "adrfam": "ipv4", 00:10:16.142 "trsvcid": "$NVMF_PORT", 00:10:16.142 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:16.142 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:16.142 "hdgst": ${hdgst:-false}, 00:10:16.142 "ddgst": ${ddgst:-false} 00:10:16.142 }, 00:10:16.142 "method": "bdev_nvme_attach_controller" 00:10:16.142 } 00:10:16.142 EOF 
00:10:16.142 )") 00:10:16.143 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:16.143 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:16.143 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:16.143 01:30:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:16.143 "params": { 00:10:16.143 "name": "Nvme0", 00:10:16.143 "trtype": "tcp", 00:10:16.143 "traddr": "10.0.0.3", 00:10:16.143 "adrfam": "ipv4", 00:10:16.143 "trsvcid": "4420", 00:10:16.143 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:16.143 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:16.143 "hdgst": false, 00:10:16.143 "ddgst": false 00:10:16.143 }, 00:10:16.143 "method": "bdev_nvme_attach_controller" 00:10:16.143 }' 00:10:16.143 [2024-11-17 01:30:24.303361] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:10:16.143 [2024-11-17 01:30:24.303773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64732 ] 00:10:16.143 [2024-11-17 01:30:24.490576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:16.143 [2024-11-17 01:30:24.591399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.401 [2024-11-17 01:30:24.778650] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:16.659 Running I/O for 1 seconds... 00:10:17.594 1344.00 IOPS, 84.00 MiB/s 00:10:17.594 Latency(us) 00:10:17.594 [2024-11-17T01:30:26.053Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:17.594 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:17.594 Verification LBA range: start 0x0 length 0x400 00:10:17.594 Nvme0n1 : 1.04 1357.08 84.82 0.00 0.00 46275.08 5719.51 40989.79 00:10:17.594 [2024-11-17T01:30:26.053Z] =================================================================================================================== 00:10:17.594 [2024-11-17T01:30:26.053Z] Total : 1357.08 84.82 0.00 0.00 46275.08 5719.51 40989.79 00:10:18.529 01:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:10:18.529 01:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:10:18.529 01:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:10:18.529 01:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:10:18.529 01:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:10:18.529 01:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:18.529 01:30:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:18.788 rmmod nvme_tcp 00:10:18.788 rmmod nvme_fabrics 00:10:18.788 rmmod nvme_keyring 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 64634 ']' 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 64634 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 64634 ']' 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 64634 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64634 00:10:18.788 killing process with pid 64634 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64634' 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 64634 00:10:18.788 01:30:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 64634 00:10:20.164 [2024-11-17 01:30:28.190869] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:20.164 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:20.164 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:20.164 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:20.164 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:20.164 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 
nomaster 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@300 -- # return 0 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:20.165 00:10:20.165 real 0m8.539s 00:10:20.165 user 0m32.250s 00:10:20.165 sys 0m1.687s 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:20.165 ************************************ 00:10:20.165 END TEST nvmf_host_management 00:10:20.165 ************************************ 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:20.165 ************************************ 00:10:20.165 START TEST nvmf_lvol 00:10:20.165 ************************************ 00:10:20.165 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:20.424 * Looking for test storage... 
00:10:20.424 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:20.424 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:20.424 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.425 --rc genhtml_branch_coverage=1 00:10:20.425 --rc genhtml_function_coverage=1 00:10:20.425 --rc genhtml_legend=1 00:10:20.425 --rc geninfo_all_blocks=1 00:10:20.425 --rc geninfo_unexecuted_blocks=1 00:10:20.425 00:10:20.425 ' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:20.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.425 --rc genhtml_branch_coverage=1 00:10:20.425 --rc genhtml_function_coverage=1 00:10:20.425 --rc genhtml_legend=1 00:10:20.425 --rc geninfo_all_blocks=1 00:10:20.425 --rc geninfo_unexecuted_blocks=1 00:10:20.425 00:10:20.425 ' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:20.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.425 --rc genhtml_branch_coverage=1 00:10:20.425 --rc genhtml_function_coverage=1 00:10:20.425 --rc genhtml_legend=1 00:10:20.425 --rc geninfo_all_blocks=1 00:10:20.425 --rc geninfo_unexecuted_blocks=1 00:10:20.425 00:10:20.425 ' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:20.425 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:20.425 --rc genhtml_branch_coverage=1 00:10:20.425 --rc genhtml_function_coverage=1 00:10:20.425 --rc genhtml_legend=1 00:10:20.425 --rc geninfo_all_blocks=1 00:10:20.425 --rc geninfo_unexecuted_blocks=1 00:10:20.425 00:10:20.425 ' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:20.425 01:30:28 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:20.425 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:10:20.425 
01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 
00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:20.425 Cannot find device "nvmf_init_br" 00:10:20.425 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:20.426 Cannot find device "nvmf_init_br2" 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:20.426 Cannot find device "nvmf_tgt_br" 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@164 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:20.426 Cannot find device "nvmf_tgt_br2" 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@165 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:20.426 Cannot find device "nvmf_init_br" 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@166 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:20.426 Cannot find device "nvmf_init_br2" 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@167 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:20.426 Cannot find device "nvmf_tgt_br" 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@168 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:20.426 Cannot find device "nvmf_tgt_br2" 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:20.426 Cannot find device "nvmf_br" 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@170 -- # true 00:10:20.426 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:20.684 Cannot find device "nvmf_init_if" 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # true 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:20.684 Cannot find device "nvmf_init_if2" 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@172 -- # true 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:20.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@173 -- # true 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:20.684 Cannot open network namespace "nvmf_tgt_ns_spdk": No 
such file or directory 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@174 -- # true 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:20.684 01:30:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:20.684 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:20.685 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:20.685 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:20.685 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:20.685 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:20.685 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:20.685 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:20.685 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@217 -- # ipts -I INPUT 
1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:20.942 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:20.942 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.086 ms 00:10:20.942 00:10:20.942 --- 10.0.0.3 ping statistics --- 00:10:20.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.942 rtt min/avg/max/mdev = 0.086/0.086/0.086/0.000 ms 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:20.942 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:20.942 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:10:20.942 00:10:20.942 --- 10.0.0.4 ping statistics --- 00:10:20.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.942 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:20.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:20.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:10:20.942 00:10:20.942 --- 10.0.0.1 ping statistics --- 00:10:20.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.942 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:20.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:20.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:10:20.942 00:10:20.942 --- 10.0.0.2 ping statistics --- 00:10:20.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:20.942 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@461 -- # return 0 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=65021 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 65021 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 65021 ']' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:20.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:20.942 01:30:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:20.942 [2024-11-17 01:30:29.320298] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:10:20.942 [2024-11-17 01:30:29.320442] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.201 [2024-11-17 01:30:29.492987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:21.201 [2024-11-17 01:30:29.584406] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.201 [2024-11-17 01:30:29.584465] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:21.201 [2024-11-17 01:30:29.584499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.201 [2024-11-17 01:30:29.584510] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.201 [2024-11-17 01:30:29.584523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.201 [2024-11-17 01:30:29.586205] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.201 [2024-11-17 01:30:29.586291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:21.201 [2024-11-17 01:30:29.586302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.460 [2024-11-17 01:30:29.747510] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:22.028 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:22.028 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:10:22.028 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:22.028 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:22.028 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:22.028 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.028 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:22.287 [2024-11-17 01:30:30.564062] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.287 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.546 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:22.546 01:30:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:22.805 01:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:22.805 01:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:23.065 01:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:23.665 01:30:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=bad1d108-f404-4e29-845c-ebd7a2b5827b 00:10:23.665 01:30:31 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u bad1d108-f404-4e29-845c-ebd7a2b5827b lvol 20 00:10:23.924 01:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=a47bea0b-048e-4211-a1be-46d07a273640 00:10:23.924 01:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:23.924 01:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a47bea0b-048e-4211-a1be-46d07a273640 00:10:24.183 01:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:24.442 [2024-11-17 01:30:32.859480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:24.442 01:30:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:25.009 01:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=65102 00:10:25.009 01:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:25.009 01:30:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:25.945 01:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a47bea0b-048e-4211-a1be-46d07a273640 MY_SNAPSHOT 00:10:26.203 01:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=f17518b5-dfac-4001-8be0-368d6f0398a3 00:10:26.203 01:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a47bea0b-048e-4211-a1be-46d07a273640 30 00:10:26.462 01:30:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone f17518b5-dfac-4001-8be0-368d6f0398a3 MY_CLONE 00:10:26.721 01:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=5afef682-26b4-49da-947a-7ab03833a176 00:10:26.721 01:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate 5afef682-26b4-49da-947a-7ab03833a176 00:10:27.288 01:30:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 65102 00:10:35.401 Initializing NVMe Controllers 00:10:35.401 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode0 00:10:35.401 Controller IO queue size 128, less than required. 00:10:35.401 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:35.401 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:35.401 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:35.401 Initialization complete. Launching workers. 
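Before the perf latency report below, it helps to condense the RPC sequence traced above into one place: create the TCP transport, stripe two malloc bdevs into a RAID-0 that backs an lvolstore, carve a logical volume out of it, export it over NVMe/TCP on 10.0.0.3:4420, and then snapshot, resize, clone and inflate that volume while spdk_nvme_perf keeps random writes in flight. The sketch below is a paraphrase of the trace, not the nvmf_lvol.sh source; capturing the rpc.py output directly into shell variables and the set -euo pipefail scaffolding are simplifications added here for readability.

#!/usr/bin/env bash
# Paraphrased sketch of the nvmf_lvol flow traced above; not the test's source.
set -euo pipefail

rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf

# TCP transport with the same options as the trace.
"$rpc_py" nvmf_create_transport -t tcp -o -u 8192

# Backing store: two 64 MB malloc bdevs (512-byte blocks) striped into raid0,
# then an lvolstore on top and a logical volume of size 20 (the test's
# LVOL_BDEV_INIT_SIZE) inside it.
"$rpc_py" bdev_malloc_create 64 512          # -> Malloc0
"$rpc_py" bdev_malloc_create 64 512          # -> Malloc1
"$rpc_py" bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$("$rpc_py" bdev_lvol_create_lvstore raid0 lvs)
lvol=$("$rpc_py" bdev_lvol_create -u "$lvs" lvol 20)

# Export the volume over NVMe/TCP on the target-namespace address.
"$rpc_py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
"$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
"$rpc_py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
"$rpc_py" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

# Random-write load from the initiator side while the volume is reshaped.
"$perf" -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &
perf_pid=$!
sleep 1

snapshot=$("$rpc_py" bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
"$rpc_py" bdev_lvol_resize "$lvol" 30
clone=$("$rpc_py" bdev_lvol_clone "$snapshot" MY_CLONE)
"$rpc_py" bdev_lvol_inflate "$clone"

wait "$perf_pid"

# Teardown, as in the trace that follows the report.
"$rpc_py" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
"$rpc_py" bdev_lvol_delete "$lvol"
"$rpc_py" bdev_lvol_delete_lvstore -u "$lvs"

The point of interleaving the lvol operations with the running perf job, as the trace suggests, is to exercise snapshot, clone and inflate metadata updates while the volume is under write load; the latency report for that run follows.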
00:10:35.401 ======================================================== 00:10:35.401 Latency(us) 00:10:35.401 Device Information : IOPS MiB/s Average min max 00:10:35.401 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 8937.90 34.91 14323.91 282.68 151551.80 00:10:35.401 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 8824.50 34.47 14503.24 5638.85 177343.88 00:10:35.401 ======================================================== 00:10:35.401 Total : 17762.40 69.38 14413.01 282.68 177343.88 00:10:35.401 00:10:35.401 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:35.401 01:30:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a47bea0b-048e-4211-a1be-46d07a273640 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bad1d108-f404-4e29-845c-ebd7a2b5827b 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:35.968 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:35.968 rmmod nvme_tcp 00:10:36.227 rmmod nvme_fabrics 00:10:36.227 rmmod nvme_keyring 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 65021 ']' 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 65021 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 65021 ']' 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 65021 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65021 00:10:36.227 killing process with pid 65021 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 65021' 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 65021 00:10:36.227 01:30:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 65021 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@246 -- # remove_spdk_ns 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@300 -- # return 0 00:10:37.605 ************************************ 00:10:37.605 END TEST nvmf_lvol 00:10:37.605 ************************************ 00:10:37.605 00:10:37.605 real 0m17.369s 00:10:37.605 user 
1m9.179s 00:10:37.605 sys 0m4.253s 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:37.605 ************************************ 00:10:37.605 START TEST nvmf_lvs_grow 00:10:37.605 ************************************ 00:10:37.605 01:30:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:37.866 * Looking for test storage... 00:10:37.866 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.866 --rc genhtml_branch_coverage=1 00:10:37.866 --rc genhtml_function_coverage=1 00:10:37.866 --rc genhtml_legend=1 00:10:37.866 --rc geninfo_all_blocks=1 00:10:37.866 --rc geninfo_unexecuted_blocks=1 00:10:37.866 00:10:37.866 ' 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.866 --rc genhtml_branch_coverage=1 00:10:37.866 --rc genhtml_function_coverage=1 00:10:37.866 --rc genhtml_legend=1 00:10:37.866 --rc geninfo_all_blocks=1 00:10:37.866 --rc geninfo_unexecuted_blocks=1 00:10:37.866 00:10:37.866 ' 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.866 --rc genhtml_branch_coverage=1 00:10:37.866 --rc genhtml_function_coverage=1 00:10:37.866 --rc genhtml_legend=1 00:10:37.866 --rc geninfo_all_blocks=1 00:10:37.866 --rc geninfo_unexecuted_blocks=1 00:10:37.866 00:10:37.866 ' 00:10:37.866 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.866 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.866 --rc genhtml_branch_coverage=1 00:10:37.866 --rc genhtml_function_coverage=1 00:10:37.866 --rc genhtml_legend=1 00:10:37.866 --rc geninfo_all_blocks=1 00:10:37.866 --rc geninfo_unexecuted_blocks=1 00:10:37.867 00:10:37.867 ' 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:37.867 01:30:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:37.867 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
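From here the lvs_grow test repeats the same bring-up the lvol test went through: nvmftestinit rebuilds the veth topology (traced below) and the target application is started inside the nvmf_tgt_ns_spdk namespace again. As a rough, self-contained sketch of that launch-and-wait step, paraphrased from the earlier trace — the polling loop here is a simplified stand-in for the harness's waitforlisten helper, which the trace shows using /var/tmp/spdk.sock and up to 100 retries:

#!/usr/bin/env bash
# Sketch only: start nvmf_tgt inside the test namespace and wait for its RPC
# socket. The real harness does this via nvmfappstart/waitforlisten.
set -euo pipefail

ns=nvmf_tgt_ns_spdk
nvmf_tgt=/home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt
rpc_sock=/var/tmp/spdk.sock

ip netns exec "$ns" "$nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

# Simplified stand-in for waitforlisten: poll for the UNIX-domain RPC socket,
# giving up if the target dies first.
for _ in $(seq 1 100); do
    if [ -S "$rpc_sock" ]; then
        break
    fi
    if ! kill -0 "$nvmfpid" 2>/dev/null; then
        echo "nvmf_tgt exited before listening on $rpc_sock" >&2
        exit 1
    fi
    sleep 0.5
done

The -m 0x7 core mask matches the nvmfappstart -m 0x7 call in the earlier trace, which is why three reactors (cores 0 through 2) appear in the target's startup notices.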
00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@460 -- # nvmf_veth_init 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 
00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:10:37.867 Cannot find device "nvmf_init_br" 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # true 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:10:37.867 Cannot find device "nvmf_init_br2" 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # true 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:10:37.867 Cannot find device "nvmf_tgt_br" 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@164 -- # true 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:10:37.867 Cannot find device "nvmf_tgt_br2" 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@165 -- # true 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:10:37.867 Cannot find device "nvmf_init_br" 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@166 -- # true 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:10:37.867 Cannot find device "nvmf_init_br2" 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@167 -- # true 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:10:37.867 Cannot find device "nvmf_tgt_br" 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@168 -- # true 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:10:37.867 Cannot find device "nvmf_tgt_br2" 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # true 00:10:37.867 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:10:38.127 Cannot find device "nvmf_br" 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@170 -- # true 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:10:38.127 Cannot find device "nvmf_init_if" 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # true 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:10:38.127 Cannot find device "nvmf_init_if2" 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # true 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:10:38.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@173 -- # true 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:10:38.127 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@174 -- # true 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
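Taken together, the nvmf_veth_init steps traced above boil down to the following standalone sketch (interface names and 10.0.0.x addresses copied from this run; run as root, and treat it as an illustration of the harness topology rather than the script itself):

  ip netns add nvmf_tgt_ns_spdk
  # Four veth pairs: the *_if ends carry addresses, the *_br ends get enslaved to a bridge.
  ip link add nvmf_init_if  type veth peer name nvmf_init_br
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  # Target-facing ends move into the namespace; initiator ends stay in the root namespace.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip addr add 10.0.0.2/24 dev nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
  for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  # One bridge in the root namespace connects all four *_br peers, so 10.0.0.1/2 can reach 10.0.0.3/4.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br  master nvmf_br
  ip link set nvmf_init_br2 master nvmf_br
  ip link set nvmf_tgt_br   master nvmf_br
  ip link set nvmf_tgt_br2  master nvmf_br

The iptables ACCEPT rules and the four pings that follow in the trace simply verify this topology before any NVMe-oF traffic is attempted.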
00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:10:38.127 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:10:38.386 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:10:38.386 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:10:38.386 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:10:38.386 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:10:38.386 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:10:38.386 00:10:38.386 --- 10.0.0.3 ping statistics --- 00:10:38.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.386 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:10:38.386 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:10:38.386 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:10:38.386 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:10:38.386 00:10:38.386 --- 10.0.0.4 ping statistics --- 00:10:38.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.386 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:10:38.386 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:10:38.386 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:38.386 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:10:38.386 00:10:38.386 --- 10.0.0.1 ping statistics --- 00:10:38.386 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.386 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:10:38.386 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:10:38.387 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:38.387 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:10:38.387 00:10:38.387 --- 10.0.0.2 ping statistics --- 00:10:38.387 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:38.387 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@461 -- # return 0 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=65488 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 65488 00:10:38.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 65488 ']' 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:38.387 01:30:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:38.387 [2024-11-17 01:30:46.762709] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
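The nvmfappstart call that produces the startup banner above amounts to roughly the following, assuming the repository path from the trace and approximating waitforlisten with a poll of the default RPC socket:

  SPDK=/home/vagrant/spdk_repo/spdk
  # The target runs inside the namespace so its TCP listener binds on the 10.0.0.3/10.0.0.4 side.
  ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # RPCs still work from the root namespace because /var/tmp/spdk.sock is a filesystem UNIX socket.
  until "$SPDK/scripts/rpc.py" rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  "$SPDK/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192

Everything after this point in the trace is issued against that one target process.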
00:10:38.387 [2024-11-17 01:30:46.762958] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:38.646 [2024-11-17 01:30:46.947386] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:38.646 [2024-11-17 01:30:47.035671] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:38.646 [2024-11-17 01:30:47.035735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:38.646 [2024-11-17 01:30:47.035769] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:38.646 [2024-11-17 01:30:47.035791] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:38.646 [2024-11-17 01:30:47.035804] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:38.646 [2024-11-17 01:30:47.037049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:38.905 [2024-11-17 01:30:47.193425] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:39.473 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:39.473 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:10:39.473 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:39.473 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:39.473 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:39.473 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.473 01:30:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:39.731 [2024-11-17 01:30:48.003246] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:39.731 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:39.731 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:39.732 ************************************ 00:10:39.732 START TEST lvs_grow_clean 00:10:39.732 ************************************ 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:39.732 01:30:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:39.732 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:39.990 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:39.990 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:40.249 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=d63d653c-9700-4244-986d-3d68ccced76d 00:10:40.249 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:40.249 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:40.509 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:40.509 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:40.509 01:30:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u d63d653c-9700-4244-986d-3d68ccced76d lvol 150 00:10:40.768 01:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ea82c92f-9a40-4db0-b18e-b81e5de75fcc 00:10:40.768 01:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:40.768 01:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:41.026 [2024-11-17 01:30:49.408192] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:41.026 [2024-11-17 01:30:49.408330] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:41.026 true 00:10:41.026 01:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:41.026 01:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:41.285 01:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:41.285 01:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:41.545 01:30:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ea82c92f-9a40-4db0-b18e-b81e5de75fcc 00:10:41.804 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:10:42.063 [2024-11-17 01:30:50.437096] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:10:42.063 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:42.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65576 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65576 /var/tmp/bdevperf.sock 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 65576 ']' 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:42.323 01:30:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:42.583 [2024-11-17 01:30:50.820411] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
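Stripped of the xtrace plumbing, the target-side setup that lvs_grow_clean has performed at this point is roughly the following (lvstore and lvol UUIDs are per-run values; rpc.py talks to the same /var/tmp/spdk.sock as above):

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  AIO=/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$AIO"
  "$RPC" bdev_aio_create "$AIO" aio_bdev 4096
  # 200M at a 4M cluster size leaves 49 usable data clusters once lvstore metadata is accounted for.
  lvs=$("$RPC" bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$("$RPC" bdev_lvol_create -u "$lvs" lvol 150)
  # Grow the backing file and let the AIO bdev notice; the lvstore itself is grown later with
  # bdev_lvol_grow_lvstore, which is the operation this test actually exercises.
  truncate -s 400M "$AIO"
  "$RPC" bdev_aio_rescan aio_bdev
  # Export the lvol over NVMe/TCP on the namespaced address checked by the pings earlier.
  "$RPC" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  "$RPC" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  "$RPC" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  "$RPC" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420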
00:10:42.583 [2024-11-17 01:30:50.820878] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65576 ] 00:10:42.583 [2024-11-17 01:30:50.999610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.844 [2024-11-17 01:30:51.129038] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:43.104 [2024-11-17 01:30:51.320704] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:10:43.364 01:30:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:43.364 01:30:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:10:43.364 01:30:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:43.932 Nvme0n1 00:10:43.932 01:30:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:43.932 [ 00:10:43.932 { 00:10:43.932 "name": "Nvme0n1", 00:10:43.932 "aliases": [ 00:10:43.932 "ea82c92f-9a40-4db0-b18e-b81e5de75fcc" 00:10:43.932 ], 00:10:43.932 "product_name": "NVMe disk", 00:10:43.932 "block_size": 4096, 00:10:43.932 "num_blocks": 38912, 00:10:43.932 "uuid": "ea82c92f-9a40-4db0-b18e-b81e5de75fcc", 00:10:43.932 "numa_id": -1, 00:10:43.932 "assigned_rate_limits": { 00:10:43.932 "rw_ios_per_sec": 0, 00:10:43.932 "rw_mbytes_per_sec": 0, 00:10:43.932 "r_mbytes_per_sec": 0, 00:10:43.932 "w_mbytes_per_sec": 0 00:10:43.932 }, 00:10:43.932 "claimed": false, 00:10:43.932 "zoned": false, 00:10:43.932 "supported_io_types": { 00:10:43.932 "read": true, 00:10:43.932 "write": true, 00:10:43.932 "unmap": true, 00:10:43.932 "flush": true, 00:10:43.932 "reset": true, 00:10:43.932 "nvme_admin": true, 00:10:43.932 "nvme_io": true, 00:10:43.932 "nvme_io_md": false, 00:10:43.932 "write_zeroes": true, 00:10:43.932 "zcopy": false, 00:10:43.932 "get_zone_info": false, 00:10:43.932 "zone_management": false, 00:10:43.932 "zone_append": false, 00:10:43.932 "compare": true, 00:10:43.932 "compare_and_write": true, 00:10:43.932 "abort": true, 00:10:43.932 "seek_hole": false, 00:10:43.932 "seek_data": false, 00:10:43.932 "copy": true, 00:10:43.932 "nvme_iov_md": false 00:10:43.932 }, 00:10:43.932 "memory_domains": [ 00:10:43.932 { 00:10:43.932 "dma_device_id": "system", 00:10:43.932 "dma_device_type": 1 00:10:43.932 } 00:10:43.932 ], 00:10:43.932 "driver_specific": { 00:10:43.932 "nvme": [ 00:10:43.932 { 00:10:43.932 "trid": { 00:10:43.932 "trtype": "TCP", 00:10:43.932 "adrfam": "IPv4", 00:10:43.932 "traddr": "10.0.0.3", 00:10:43.932 "trsvcid": "4420", 00:10:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:43.932 }, 00:10:43.932 "ctrlr_data": { 00:10:43.932 "cntlid": 1, 00:10:43.932 "vendor_id": "0x8086", 00:10:43.932 "model_number": "SPDK bdev Controller", 00:10:43.932 "serial_number": "SPDK0", 00:10:43.932 "firmware_revision": "25.01", 00:10:43.932 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:43.932 "oacs": { 00:10:43.932 "security": 0, 00:10:43.932 "format": 0, 00:10:43.932 "firmware": 0, 
00:10:43.932 "ns_manage": 0 00:10:43.932 }, 00:10:43.932 "multi_ctrlr": true, 00:10:43.932 "ana_reporting": false 00:10:43.932 }, 00:10:43.932 "vs": { 00:10:43.932 "nvme_version": "1.3" 00:10:43.932 }, 00:10:43.932 "ns_data": { 00:10:43.932 "id": 1, 00:10:43.932 "can_share": true 00:10:43.932 } 00:10:43.932 } 00:10:43.932 ], 00:10:43.932 "mp_policy": "active_passive" 00:10:43.932 } 00:10:43.932 } 00:10:43.932 ] 00:10:43.932 01:30:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65605 00:10:43.932 01:30:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:43.932 01:30:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:44.192 Running I/O for 10 seconds... 00:10:45.130 Latency(us) 00:10:45.130 [2024-11-17T01:30:53.589Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:45.130 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:45.130 Nvme0n1 : 1.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:10:45.130 [2024-11-17T01:30:53.589Z] =================================================================================================================== 00:10:45.130 [2024-11-17T01:30:53.589Z] Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:10:45.130 00:10:46.069 01:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:46.328 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:46.328 Nvme0n1 : 2.00 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:10:46.328 [2024-11-17T01:30:54.787Z] =================================================================================================================== 00:10:46.328 [2024-11-17T01:30:54.787Z] Total : 5461.00 21.33 0.00 0.00 0.00 0.00 0.00 00:10:46.328 00:10:46.328 true 00:10:46.328 01:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:46.328 01:30:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:46.587 01:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:46.587 01:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:46.587 01:30:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 65605 00:10:47.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:47.153 Nvme0n1 : 3.00 5389.33 21.05 0.00 0.00 0.00 0.00 0.00 00:10:47.153 [2024-11-17T01:30:55.612Z] =================================================================================================================== 00:10:47.153 [2024-11-17T01:30:55.612Z] Total : 5389.33 21.05 0.00 0.00 0.00 0.00 0.00 00:10:47.153 00:10:48.088 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:48.088 Nvme0n1 : 4.00 5375.50 21.00 0.00 0.00 0.00 0.00 0.00 00:10:48.088 [2024-11-17T01:30:56.547Z] 
=================================================================================================================== 00:10:48.088 [2024-11-17T01:30:56.547Z] Total : 5375.50 21.00 0.00 0.00 0.00 0.00 0.00 00:10:48.088 00:10:49.467 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:49.467 Nvme0n1 : 5.00 5367.20 20.97 0.00 0.00 0.00 0.00 0.00 00:10:49.467 [2024-11-17T01:30:57.926Z] =================================================================================================================== 00:10:49.467 [2024-11-17T01:30:57.926Z] Total : 5367.20 20.97 0.00 0.00 0.00 0.00 0.00 00:10:49.467 00:10:50.405 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:50.405 Nvme0n1 : 6.00 5340.50 20.86 0.00 0.00 0.00 0.00 0.00 00:10:50.405 [2024-11-17T01:30:58.864Z] =================================================================================================================== 00:10:50.405 [2024-11-17T01:30:58.864Z] Total : 5340.50 20.86 0.00 0.00 0.00 0.00 0.00 00:10:50.405 00:10:51.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:51.343 Nvme0n1 : 7.00 5321.43 20.79 0.00 0.00 0.00 0.00 0.00 00:10:51.343 [2024-11-17T01:30:59.802Z] =================================================================================================================== 00:10:51.343 [2024-11-17T01:30:59.802Z] Total : 5321.43 20.79 0.00 0.00 0.00 0.00 0.00 00:10:51.343 00:10:52.280 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:52.280 Nvme0n1 : 8.00 5323.00 20.79 0.00 0.00 0.00 0.00 0.00 00:10:52.280 [2024-11-17T01:31:00.739Z] =================================================================================================================== 00:10:52.280 [2024-11-17T01:31:00.739Z] Total : 5323.00 20.79 0.00 0.00 0.00 0.00 0.00 00:10:52.280 00:10:53.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:53.218 Nvme0n1 : 9.00 5310.11 20.74 0.00 0.00 0.00 0.00 0.00 00:10:53.218 [2024-11-17T01:31:01.677Z] =================================================================================================================== 00:10:53.218 [2024-11-17T01:31:01.677Z] Total : 5310.11 20.74 0.00 0.00 0.00 0.00 0.00 00:10:53.218 00:10:54.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.157 Nvme0n1 : 10.00 5287.10 20.65 0.00 0.00 0.00 0.00 0.00 00:10:54.157 [2024-11-17T01:31:02.616Z] =================================================================================================================== 00:10:54.157 [2024-11-17T01:31:02.616Z] Total : 5287.10 20.65 0.00 0.00 0.00 0.00 0.00 00:10:54.157 00:10:54.157 00:10:54.157 Latency(us) 00:10:54.157 [2024-11-17T01:31:02.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.157 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:54.157 Nvme0n1 : 10.00 5297.76 20.69 0.00 0.00 24153.90 16443.58 52428.80 00:10:54.157 [2024-11-17T01:31:02.616Z] =================================================================================================================== 00:10:54.157 [2024-11-17T01:31:02.616Z] Total : 5297.76 20.69 0.00 0.00 24153.90 16443.58 52428.80 00:10:54.157 { 00:10:54.157 "results": [ 00:10:54.157 { 00:10:54.157 "job": "Nvme0n1", 00:10:54.157 "core_mask": "0x2", 00:10:54.157 "workload": "randwrite", 00:10:54.157 "status": "finished", 00:10:54.157 "queue_depth": 128, 00:10:54.157 "io_size": 4096, 00:10:54.157 "runtime": 
10.004033, 00:10:54.157 "iops": 5297.763412015934, 00:10:54.157 "mibps": 20.69438832818724, 00:10:54.157 "io_failed": 0, 00:10:54.157 "io_timeout": 0, 00:10:54.157 "avg_latency_us": 24153.902537989572, 00:10:54.157 "min_latency_us": 16443.578181818182, 00:10:54.157 "max_latency_us": 52428.8 00:10:54.157 } 00:10:54.157 ], 00:10:54.157 "core_count": 1 00:10:54.157 } 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65576 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 65576 ']' 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 65576 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65576 00:10:54.157 killing process with pid 65576 00:10:54.157 Received shutdown signal, test time was about 10.000000 seconds 00:10:54.157 00:10:54.157 Latency(us) 00:10:54.157 [2024-11-17T01:31:02.616Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:54.157 [2024-11-17T01:31:02.616Z] =================================================================================================================== 00:10:54.157 [2024-11-17T01:31:02.616Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65576' 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 65576 00:10:54.157 01:31:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 65576 00:10:55.096 01:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:10:55.355 01:31:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:55.614 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:55.614 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:55.873 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:10:55.873 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:10:55.873 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:56.132 [2024-11-17 01:31:04.494541] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:56.132 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:56.391 request: 00:10:56.391 { 00:10:56.391 "uuid": "d63d653c-9700-4244-986d-3d68ccced76d", 00:10:56.391 "method": "bdev_lvol_get_lvstores", 00:10:56.391 "req_id": 1 00:10:56.391 } 00:10:56.391 Got JSON-RPC error response 00:10:56.391 response: 00:10:56.391 { 00:10:56.391 "code": -19, 00:10:56.391 "message": "No such device" 00:10:56.391 } 00:10:56.392 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:56.392 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:56.392 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:56.392 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:56.392 01:31:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:56.651 aio_bdev 00:10:56.651 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
ea82c92f-9a40-4db0-b18e-b81e5de75fcc 00:10:56.651 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=ea82c92f-9a40-4db0-b18e-b81e5de75fcc 00:10:56.651 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:56.651 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:56.651 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:56.651 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:56.651 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:56.910 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ea82c92f-9a40-4db0-b18e-b81e5de75fcc -t 2000 00:10:57.170 [ 00:10:57.170 { 00:10:57.170 "name": "ea82c92f-9a40-4db0-b18e-b81e5de75fcc", 00:10:57.170 "aliases": [ 00:10:57.170 "lvs/lvol" 00:10:57.170 ], 00:10:57.170 "product_name": "Logical Volume", 00:10:57.170 "block_size": 4096, 00:10:57.170 "num_blocks": 38912, 00:10:57.170 "uuid": "ea82c92f-9a40-4db0-b18e-b81e5de75fcc", 00:10:57.170 "assigned_rate_limits": { 00:10:57.170 "rw_ios_per_sec": 0, 00:10:57.170 "rw_mbytes_per_sec": 0, 00:10:57.170 "r_mbytes_per_sec": 0, 00:10:57.170 "w_mbytes_per_sec": 0 00:10:57.170 }, 00:10:57.170 "claimed": false, 00:10:57.170 "zoned": false, 00:10:57.170 "supported_io_types": { 00:10:57.170 "read": true, 00:10:57.170 "write": true, 00:10:57.170 "unmap": true, 00:10:57.170 "flush": false, 00:10:57.170 "reset": true, 00:10:57.170 "nvme_admin": false, 00:10:57.170 "nvme_io": false, 00:10:57.170 "nvme_io_md": false, 00:10:57.170 "write_zeroes": true, 00:10:57.170 "zcopy": false, 00:10:57.170 "get_zone_info": false, 00:10:57.170 "zone_management": false, 00:10:57.170 "zone_append": false, 00:10:57.170 "compare": false, 00:10:57.170 "compare_and_write": false, 00:10:57.170 "abort": false, 00:10:57.170 "seek_hole": true, 00:10:57.170 "seek_data": true, 00:10:57.170 "copy": false, 00:10:57.170 "nvme_iov_md": false 00:10:57.170 }, 00:10:57.170 "driver_specific": { 00:10:57.170 "lvol": { 00:10:57.170 "lvol_store_uuid": "d63d653c-9700-4244-986d-3d68ccced76d", 00:10:57.170 "base_bdev": "aio_bdev", 00:10:57.170 "thin_provision": false, 00:10:57.170 "num_allocated_clusters": 38, 00:10:57.170 "snapshot": false, 00:10:57.170 "clone": false, 00:10:57.170 "esnap_clone": false 00:10:57.170 } 00:10:57.170 } 00:10:57.170 } 00:10:57.170 ] 00:10:57.170 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:57.170 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:57.170 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:57.429 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:57.429 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:57.429 01:31:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:57.703 01:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:57.703 01:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete ea82c92f-9a40-4db0-b18e-b81e5de75fcc 00:10:57.992 01:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u d63d653c-9700-4244-986d-3d68ccced76d 00:10:58.251 01:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:58.510 01:31:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:58.769 ************************************ 00:10:58.769 END TEST lvs_grow_clean 00:10:58.769 ************************************ 00:10:58.769 00:10:58.769 real 0m19.132s 00:10:58.769 user 0m18.392s 00:10:58.769 sys 0m2.380s 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:58.769 ************************************ 00:10:58.769 START TEST lvs_grow_dirty 00:10:58.769 ************************************ 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:58.769 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:59.028 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:10:59.028 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:59.287 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:59.287 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:59.546 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=0a5be798-b9ca-4bac-a9f7-81b391323489 00:10:59.546 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:10:59.546 01:31:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:59.805 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:59.805 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:59.805 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0a5be798-b9ca-4bac-a9f7-81b391323489 lvol 150 00:11:00.064 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c3c63025-7cac-4f89-b6ea-d7d312bf7e3a 00:11:00.064 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:00.064 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:00.322 [2024-11-17 01:31:08.689973] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:00.322 [2024-11-17 01:31:08.690098] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:00.322 true 00:11:00.323 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:00.323 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:00.581 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:00.581 01:31:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:00.840 01:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c3c63025-7cac-4f89-b6ea-d7d312bf7e3a 00:11:01.098 01:31:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:11:01.356 [2024-11-17 01:31:09.726672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:01.356 01:31:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:01.616 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=65861 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 65861 /var/tmp/bdevperf.sock 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 65861 ']' 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:01.616 01:31:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:01.874 [2024-11-17 01:31:10.137918] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
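The initiator side mirrors the clean run above: a standalone bdevperf instance is started against its own RPC socket, the exported namespace is attached as Nvme0, and the 10-second randwrite job is kicked off. A condensed sketch, with paths and flags taken from the trace:

  SPDK=/home/vagrant/spdk_repo/spdk
  "$SPDK/build/examples/bdevperf" -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  bdevperf_pid=$!
  # -z keeps bdevperf idle until perform_tests arrives; the harness waits for
  # /var/tmp/bdevperf.sock to appear before issuing the attach.
  "$SPDK/scripts/rpc.py" -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests

While the job runs, bdev_lvol_grow_lvstore is issued against the live store, which is why total_data_clusters jumps from 49 to 99 partway through the 10-second run.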
00:11:01.875 [2024-11-17 01:31:10.138076] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65861 ] 00:11:01.875 [2024-11-17 01:31:10.315196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:02.133 [2024-11-17 01:31:10.451420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:02.393 [2024-11-17 01:31:10.624801] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:02.652 01:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:02.652 01:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:02.652 01:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:02.910 Nvme0n1 00:11:03.169 01:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:03.169 [ 00:11:03.169 { 00:11:03.169 "name": "Nvme0n1", 00:11:03.169 "aliases": [ 00:11:03.169 "c3c63025-7cac-4f89-b6ea-d7d312bf7e3a" 00:11:03.169 ], 00:11:03.169 "product_name": "NVMe disk", 00:11:03.169 "block_size": 4096, 00:11:03.169 "num_blocks": 38912, 00:11:03.169 "uuid": "c3c63025-7cac-4f89-b6ea-d7d312bf7e3a", 00:11:03.169 "numa_id": -1, 00:11:03.169 "assigned_rate_limits": { 00:11:03.169 "rw_ios_per_sec": 0, 00:11:03.169 "rw_mbytes_per_sec": 0, 00:11:03.169 "r_mbytes_per_sec": 0, 00:11:03.169 "w_mbytes_per_sec": 0 00:11:03.169 }, 00:11:03.169 "claimed": false, 00:11:03.169 "zoned": false, 00:11:03.169 "supported_io_types": { 00:11:03.169 "read": true, 00:11:03.169 "write": true, 00:11:03.169 "unmap": true, 00:11:03.169 "flush": true, 00:11:03.169 "reset": true, 00:11:03.169 "nvme_admin": true, 00:11:03.169 "nvme_io": true, 00:11:03.169 "nvme_io_md": false, 00:11:03.169 "write_zeroes": true, 00:11:03.169 "zcopy": false, 00:11:03.169 "get_zone_info": false, 00:11:03.169 "zone_management": false, 00:11:03.169 "zone_append": false, 00:11:03.169 "compare": true, 00:11:03.169 "compare_and_write": true, 00:11:03.169 "abort": true, 00:11:03.169 "seek_hole": false, 00:11:03.169 "seek_data": false, 00:11:03.169 "copy": true, 00:11:03.170 "nvme_iov_md": false 00:11:03.170 }, 00:11:03.170 "memory_domains": [ 00:11:03.170 { 00:11:03.170 "dma_device_id": "system", 00:11:03.170 "dma_device_type": 1 00:11:03.170 } 00:11:03.170 ], 00:11:03.170 "driver_specific": { 00:11:03.170 "nvme": [ 00:11:03.170 { 00:11:03.170 "trid": { 00:11:03.170 "trtype": "TCP", 00:11:03.170 "adrfam": "IPv4", 00:11:03.170 "traddr": "10.0.0.3", 00:11:03.170 "trsvcid": "4420", 00:11:03.170 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:03.170 }, 00:11:03.170 "ctrlr_data": { 00:11:03.170 "cntlid": 1, 00:11:03.170 "vendor_id": "0x8086", 00:11:03.170 "model_number": "SPDK bdev Controller", 00:11:03.170 "serial_number": "SPDK0", 00:11:03.170 "firmware_revision": "25.01", 00:11:03.170 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:03.170 "oacs": { 00:11:03.170 "security": 0, 00:11:03.170 "format": 0, 00:11:03.170 "firmware": 0, 
00:11:03.170 "ns_manage": 0 00:11:03.170 }, 00:11:03.170 "multi_ctrlr": true, 00:11:03.170 "ana_reporting": false 00:11:03.170 }, 00:11:03.170 "vs": { 00:11:03.170 "nvme_version": "1.3" 00:11:03.170 }, 00:11:03.170 "ns_data": { 00:11:03.170 "id": 1, 00:11:03.170 "can_share": true 00:11:03.170 } 00:11:03.170 } 00:11:03.170 ], 00:11:03.170 "mp_policy": "active_passive" 00:11:03.170 } 00:11:03.170 } 00:11:03.170 ] 00:11:03.170 01:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:03.170 01:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=65885 00:11:03.170 01:31:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:03.429 Running I/O for 10 seconds... 00:11:04.365 Latency(us) 00:11:04.365 [2024-11-17T01:31:12.824Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:04.365 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.366 Nvme0n1 : 1.00 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:11:04.366 [2024-11-17T01:31:12.825Z] =================================================================================================================== 00:11:04.366 [2024-11-17T01:31:12.825Z] Total : 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:11:04.366 00:11:05.310 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:05.310 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.310 Nvme0n1 : 2.00 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:11:05.310 [2024-11-17T01:31:13.769Z] =================================================================================================================== 00:11:05.310 [2024-11-17T01:31:13.769Z] Total : 5334.00 20.84 0.00 0.00 0.00 0.00 0.00 00:11:05.310 00:11:05.569 true 00:11:05.569 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:05.569 01:31:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:05.829 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:05.829 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:05.829 01:31:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 65885 00:11:06.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.397 Nvme0n1 : 3.00 5376.33 21.00 0.00 0.00 0.00 0.00 0.00 00:11:06.397 [2024-11-17T01:31:14.856Z] =================================================================================================================== 00:11:06.397 [2024-11-17T01:31:14.856Z] Total : 5376.33 21.00 0.00 0.00 0.00 0.00 0.00 00:11:06.397 00:11:07.333 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:07.333 Nvme0n1 : 4.00 5365.75 20.96 0.00 0.00 0.00 0.00 0.00 00:11:07.333 [2024-11-17T01:31:15.792Z] 
=================================================================================================================== 00:11:07.333 [2024-11-17T01:31:15.792Z] Total : 5365.75 20.96 0.00 0.00 0.00 0.00 0.00 00:11:07.333 00:11:08.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:08.710 Nvme0n1 : 5.00 5198.60 20.31 0.00 0.00 0.00 0.00 0.00 00:11:08.710 [2024-11-17T01:31:17.169Z] =================================================================================================================== 00:11:08.710 [2024-11-17T01:31:17.169Z] Total : 5198.60 20.31 0.00 0.00 0.00 0.00 0.00 00:11:08.710 00:11:09.647 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:09.647 Nvme0n1 : 6.00 5242.33 20.48 0.00 0.00 0.00 0.00 0.00 00:11:09.647 [2024-11-17T01:31:18.106Z] =================================================================================================================== 00:11:09.647 [2024-11-17T01:31:18.106Z] Total : 5242.33 20.48 0.00 0.00 0.00 0.00 0.00 00:11:09.647 00:11:10.584 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:10.584 Nvme0n1 : 7.00 5237.29 20.46 0.00 0.00 0.00 0.00 0.00 00:11:10.584 [2024-11-17T01:31:19.043Z] =================================================================================================================== 00:11:10.584 [2024-11-17T01:31:19.043Z] Total : 5237.29 20.46 0.00 0.00 0.00 0.00 0.00 00:11:10.584 00:11:11.520 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:11.520 Nvme0n1 : 8.00 5249.38 20.51 0.00 0.00 0.00 0.00 0.00 00:11:11.520 [2024-11-17T01:31:19.979Z] =================================================================================================================== 00:11:11.520 [2024-11-17T01:31:19.979Z] Total : 5249.38 20.51 0.00 0.00 0.00 0.00 0.00 00:11:11.520 00:11:12.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:12.456 Nvme0n1 : 9.00 5244.67 20.49 0.00 0.00 0.00 0.00 0.00 00:11:12.456 [2024-11-17T01:31:20.915Z] =================================================================================================================== 00:11:12.456 [2024-11-17T01:31:20.915Z] Total : 5244.67 20.49 0.00 0.00 0.00 0.00 0.00 00:11:12.456 00:11:13.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.418 Nvme0n1 : 10.00 5240.90 20.47 0.00 0.00 0.00 0.00 0.00 00:11:13.418 [2024-11-17T01:31:21.877Z] =================================================================================================================== 00:11:13.419 [2024-11-17T01:31:21.878Z] Total : 5240.90 20.47 0.00 0.00 0.00 0.00 0.00 00:11:13.419 00:11:13.419 00:11:13.419 Latency(us) 00:11:13.419 [2024-11-17T01:31:21.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.419 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:13.419 Nvme0n1 : 10.02 5240.93 20.47 0.00 0.00 24417.49 4498.15 177304.67 00:11:13.419 [2024-11-17T01:31:21.878Z] =================================================================================================================== 00:11:13.419 [2024-11-17T01:31:21.878Z] Total : 5240.93 20.47 0.00 0.00 24417.49 4498.15 177304.67 00:11:13.419 { 00:11:13.419 "results": [ 00:11:13.419 { 00:11:13.419 "job": "Nvme0n1", 00:11:13.419 "core_mask": "0x2", 00:11:13.419 "workload": "randwrite", 00:11:13.419 "status": "finished", 00:11:13.419 "queue_depth": 128, 00:11:13.419 "io_size": 4096, 00:11:13.419 "runtime": 
10.024372, 00:11:13.419 "iops": 5240.92681317094, 00:11:13.419 "mibps": 20.472370363948983, 00:11:13.419 "io_failed": 0, 00:11:13.419 "io_timeout": 0, 00:11:13.419 "avg_latency_us": 24417.488491954588, 00:11:13.419 "min_latency_us": 4498.152727272727, 00:11:13.419 "max_latency_us": 177304.6690909091 00:11:13.419 } 00:11:13.419 ], 00:11:13.419 "core_count": 1 00:11:13.419 } 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 65861 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 65861 ']' 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 65861 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65861 00:11:13.419 killing process with pid 65861 00:11:13.419 Received shutdown signal, test time was about 10.000000 seconds 00:11:13.419 00:11:13.419 Latency(us) 00:11:13.419 [2024-11-17T01:31:21.878Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:13.419 [2024-11-17T01:31:21.878Z] =================================================================================================================== 00:11:13.419 [2024-11-17T01:31:21.878Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65861' 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 65861 00:11:13.419 01:31:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 65861 00:11:14.357 01:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:11:14.616 01:31:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:14.876 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:14.876 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 65488 
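The `kill -9 65488` above is what makes this the "dirty" variant of the lvs_grow test: the nvmf target that owns the lvstore is terminated without a clean shutdown, so the blobstore metadata on the backing aio bdev is left unflushed and has to be recovered when the store is reloaded further down. A condensed sketch of the sequence the trace has just executed, reusing only commands, addresses and IDs that appear in this log:

  # attach the target's namespace over TCP as a local bdev (Nvme0n1) via the bdevperf RPC socket
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
      -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  # run the 10 s randwrite workload against it
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # grow the lvstore while I/O is in flight, then check total_data_clusters
  scripts/rpc.py bdev_lvol_grow_lvstore -u 0a5be798-b9ca-4bac-a9f7-81b391323489
  # finally SIGKILL the target so the lvstore is never cleanly unloaded
  kill -9 65488   # the nvmf_tgt pid from this run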
00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 65488 00:11:15.135 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 65488 Killed "${NVMF_APP[@]}" "$@" 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:15.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=66025 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 66025 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 66025 ']' 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:15.135 01:31:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:15.394 [2024-11-17 01:31:23.681920] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:15.394 [2024-11-17 01:31:23.682283] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:15.394 [2024-11-17 01:31:23.846550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:15.653 [2024-11-17 01:31:23.929628] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:15.653 [2024-11-17 01:31:23.929993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:15.653 [2024-11-17 01:31:23.930026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:15.653 [2024-11-17 01:31:23.930051] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:15.654 [2024-11-17 01:31:23.930066] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
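The spdk_trace notices here are a consequence of restarting the target with every tracepoint group enabled (`-e 0xFFFF` on the nvmf_tgt command line above). The shared-memory file they point at is the same nvmf_trace.0 this run archives during cleanup. A minimal sketch of the two ways the log itself suggests for looking at those events (instance id 0 matches the `-i 0` the target was started with):

  # capture a live snapshot of nvmf tracepoints from app instance 0
  spdk_trace -s nvmf -i 0
  # or keep the raw shared-memory file for offline analysis, as the test teardown does
  cp /dev/shm/nvmf_trace.0 ./nvmf_trace.0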
00:11:15.654 [2024-11-17 01:31:23.931188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:15.654 [2024-11-17 01:31:24.081868] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:16.222 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:16.222 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:11:16.222 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:16.222 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:16.222 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:16.222 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:16.222 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:16.481 [2024-11-17 01:31:24.849332] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:16.481 [2024-11-17 01:31:24.849897] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:16.481 [2024-11-17 01:31:24.850259] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:16.481 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:16.481 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c3c63025-7cac-4f89-b6ea-d7d312bf7e3a 00:11:16.481 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c3c63025-7cac-4f89-b6ea-d7d312bf7e3a 00:11:16.481 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.481 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:16.481 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.481 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.481 01:31:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:16.740 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3c63025-7cac-4f89-b6ea-d7d312bf7e3a -t 2000 00:11:16.999 [ 00:11:16.999 { 00:11:16.999 "name": "c3c63025-7cac-4f89-b6ea-d7d312bf7e3a", 00:11:16.999 "aliases": [ 00:11:16.999 "lvs/lvol" 00:11:16.999 ], 00:11:16.999 "product_name": "Logical Volume", 00:11:16.999 "block_size": 4096, 00:11:17.000 "num_blocks": 38912, 00:11:17.000 "uuid": "c3c63025-7cac-4f89-b6ea-d7d312bf7e3a", 00:11:17.000 "assigned_rate_limits": { 00:11:17.000 "rw_ios_per_sec": 0, 00:11:17.000 "rw_mbytes_per_sec": 0, 00:11:17.000 "r_mbytes_per_sec": 0, 00:11:17.000 "w_mbytes_per_sec": 0 00:11:17.000 }, 00:11:17.000 
"claimed": false, 00:11:17.000 "zoned": false, 00:11:17.000 "supported_io_types": { 00:11:17.000 "read": true, 00:11:17.000 "write": true, 00:11:17.000 "unmap": true, 00:11:17.000 "flush": false, 00:11:17.000 "reset": true, 00:11:17.000 "nvme_admin": false, 00:11:17.000 "nvme_io": false, 00:11:17.000 "nvme_io_md": false, 00:11:17.000 "write_zeroes": true, 00:11:17.000 "zcopy": false, 00:11:17.000 "get_zone_info": false, 00:11:17.000 "zone_management": false, 00:11:17.000 "zone_append": false, 00:11:17.000 "compare": false, 00:11:17.000 "compare_and_write": false, 00:11:17.000 "abort": false, 00:11:17.000 "seek_hole": true, 00:11:17.000 "seek_data": true, 00:11:17.000 "copy": false, 00:11:17.000 "nvme_iov_md": false 00:11:17.000 }, 00:11:17.000 "driver_specific": { 00:11:17.000 "lvol": { 00:11:17.000 "lvol_store_uuid": "0a5be798-b9ca-4bac-a9f7-81b391323489", 00:11:17.000 "base_bdev": "aio_bdev", 00:11:17.000 "thin_provision": false, 00:11:17.000 "num_allocated_clusters": 38, 00:11:17.000 "snapshot": false, 00:11:17.000 "clone": false, 00:11:17.000 "esnap_clone": false 00:11:17.000 } 00:11:17.000 } 00:11:17.000 } 00:11:17.000 ] 00:11:17.000 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:17.000 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:17.000 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:17.259 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:17.259 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:17.259 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:17.518 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:17.518 01:31:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:17.778 [2024-11-17 01:31:26.171417] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.778 01:31:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:17.778 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:18.037 request: 00:11:18.037 { 00:11:18.037 "uuid": "0a5be798-b9ca-4bac-a9f7-81b391323489", 00:11:18.037 "method": "bdev_lvol_get_lvstores", 00:11:18.037 "req_id": 1 00:11:18.037 } 00:11:18.037 Got JSON-RPC error response 00:11:18.037 response: 00:11:18.037 { 00:11:18.037 "code": -19, 00:11:18.037 "message": "No such device" 00:11:18.037 } 00:11:18.297 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:11:18.297 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:18.297 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:18.297 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:18.297 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:18.297 aio_bdev 00:11:18.556 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c3c63025-7cac-4f89-b6ea-d7d312bf7e3a 00:11:18.556 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=c3c63025-7cac-4f89-b6ea-d7d312bf7e3a 00:11:18.556 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:18.556 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:11:18.556 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:18.556 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:18.556 01:31:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:18.556 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b c3c63025-7cac-4f89-b6ea-d7d312bf7e3a -t 2000 00:11:18.815 [ 00:11:18.815 { 
00:11:18.815 "name": "c3c63025-7cac-4f89-b6ea-d7d312bf7e3a", 00:11:18.815 "aliases": [ 00:11:18.815 "lvs/lvol" 00:11:18.815 ], 00:11:18.815 "product_name": "Logical Volume", 00:11:18.815 "block_size": 4096, 00:11:18.815 "num_blocks": 38912, 00:11:18.815 "uuid": "c3c63025-7cac-4f89-b6ea-d7d312bf7e3a", 00:11:18.815 "assigned_rate_limits": { 00:11:18.815 "rw_ios_per_sec": 0, 00:11:18.815 "rw_mbytes_per_sec": 0, 00:11:18.815 "r_mbytes_per_sec": 0, 00:11:18.815 "w_mbytes_per_sec": 0 00:11:18.815 }, 00:11:18.815 "claimed": false, 00:11:18.815 "zoned": false, 00:11:18.815 "supported_io_types": { 00:11:18.815 "read": true, 00:11:18.815 "write": true, 00:11:18.815 "unmap": true, 00:11:18.815 "flush": false, 00:11:18.815 "reset": true, 00:11:18.815 "nvme_admin": false, 00:11:18.815 "nvme_io": false, 00:11:18.816 "nvme_io_md": false, 00:11:18.816 "write_zeroes": true, 00:11:18.816 "zcopy": false, 00:11:18.816 "get_zone_info": false, 00:11:18.816 "zone_management": false, 00:11:18.816 "zone_append": false, 00:11:18.816 "compare": false, 00:11:18.816 "compare_and_write": false, 00:11:18.816 "abort": false, 00:11:18.816 "seek_hole": true, 00:11:18.816 "seek_data": true, 00:11:18.816 "copy": false, 00:11:18.816 "nvme_iov_md": false 00:11:18.816 }, 00:11:18.816 "driver_specific": { 00:11:18.816 "lvol": { 00:11:18.816 "lvol_store_uuid": "0a5be798-b9ca-4bac-a9f7-81b391323489", 00:11:18.816 "base_bdev": "aio_bdev", 00:11:18.816 "thin_provision": false, 00:11:18.816 "num_allocated_clusters": 38, 00:11:18.816 "snapshot": false, 00:11:18.816 "clone": false, 00:11:18.816 "esnap_clone": false 00:11:18.816 } 00:11:18.816 } 00:11:18.816 } 00:11:18.816 ] 00:11:18.816 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:11:18.816 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:18.816 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:19.384 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:19.384 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:19.384 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:19.384 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:19.384 01:31:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete c3c63025-7cac-4f89-b6ea-d7d312bf7e3a 00:11:19.643 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0a5be798-b9ca-4bac-a9f7-81b391323489 00:11:19.902 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:20.162 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:11:20.730 00:11:20.730 real 0m21.672s 00:11:20.730 user 0m44.839s 00:11:20.730 sys 0m9.711s 00:11:20.730 ************************************ 00:11:20.730 END TEST lvs_grow_dirty 00:11:20.730 ************************************ 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:20.730 nvmf_trace.0 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:20.730 01:31:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:20.730 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:20.731 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:20.731 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:20.731 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:20.989 rmmod nvme_tcp 00:11:20.989 rmmod nvme_fabrics 00:11:20.989 rmmod nvme_keyring 00:11:20.989 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:20.989 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:20.989 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:20.989 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 66025 ']' 00:11:20.989 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 66025 00:11:20.989 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 66025 ']' 00:11:20.990 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 66025 00:11:20.990 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:11:20.990 01:31:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:20.990 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66025 00:11:20.990 killing process with pid 66025 00:11:20.990 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:20.990 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:20.990 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66025' 00:11:20.990 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 66025 00:11:20.990 01:31:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 66025 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:21.926 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:21.927 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@246 -- # remove_spdk_ns 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@300 -- # return 0 00:11:22.186 00:11:22.186 real 0m44.442s 00:11:22.186 user 1m10.498s 00:11:22.186 sys 0m13.051s 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.186 ************************************ 00:11:22.186 END TEST nvmf_lvs_grow 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:22.186 ************************************ 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:22.186 ************************************ 00:11:22.186 START TEST nvmf_bdev_io_wait 00:11:22.186 ************************************ 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:22.186 * Looking for test storage... 
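From here the log switches to the next sub-suite. Each suite is driven through the same run_test wrapper, which makes it possible to reproduce one of them outside CI; a hedged sketch, assuming the vagrant checkout layout used throughout this log and a root shell (the netns and iptables setup below requires it):

  cd /home/vagrant/spdk_repo/spdk
  test/nvmf/target/bdev_io_wait.sh --transport=tcp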
00:11:22.186 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:11:22.186 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:22.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.446 --rc genhtml_branch_coverage=1 00:11:22.446 --rc genhtml_function_coverage=1 00:11:22.446 --rc genhtml_legend=1 00:11:22.446 --rc geninfo_all_blocks=1 00:11:22.446 --rc geninfo_unexecuted_blocks=1 00:11:22.446 00:11:22.446 ' 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:22.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.446 --rc genhtml_branch_coverage=1 00:11:22.446 --rc genhtml_function_coverage=1 00:11:22.446 --rc genhtml_legend=1 00:11:22.446 --rc geninfo_all_blocks=1 00:11:22.446 --rc geninfo_unexecuted_blocks=1 00:11:22.446 00:11:22.446 ' 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:22.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.446 --rc genhtml_branch_coverage=1 00:11:22.446 --rc genhtml_function_coverage=1 00:11:22.446 --rc genhtml_legend=1 00:11:22.446 --rc geninfo_all_blocks=1 00:11:22.446 --rc geninfo_unexecuted_blocks=1 00:11:22.446 00:11:22.446 ' 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:22.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.446 --rc genhtml_branch_coverage=1 00:11:22.446 --rc genhtml_function_coverage=1 00:11:22.446 --rc genhtml_legend=1 00:11:22.446 --rc geninfo_all_blocks=1 00:11:22.446 --rc geninfo_unexecuted_blocks=1 00:11:22.446 00:11:22.446 ' 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@7 -- # uname -s 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.446 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.447 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 
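With NET_TYPE=virt, the nvmftestinit call that follows builds the whole fabric out of veth pairs instead of physical NICs: the initiator-side interfaces stay in the root namespace, the target-side interfaces are moved into nvmf_tgt_ns_spdk, and everything is tied together by the nvmf_br bridge. A condensed sketch of that topology using the same names and addresses the trace assigns below (the full setup also creates the second initiator/target pair for 10.0.0.2 and 10.0.0.4, brings every link up, and adds the matching iptables ACCEPT rules):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side, 10.0.0.1/24
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br     # target side, 10.0.0.3/24
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br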
00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:22.447 
01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:22.447 Cannot find device "nvmf_init_br" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:22.447 Cannot find device "nvmf_init_br2" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:22.447 Cannot find device "nvmf_tgt_br" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@164 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:22.447 Cannot find device "nvmf_tgt_br2" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@165 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:22.447 Cannot find device "nvmf_init_br" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@166 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:22.447 Cannot find device "nvmf_init_br2" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@167 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:22.447 Cannot find device "nvmf_tgt_br" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@168 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:22.447 Cannot find device "nvmf_tgt_br2" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:22.447 Cannot find device "nvmf_br" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@170 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:22.447 Cannot find device "nvmf_init_if" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:22.447 Cannot find device "nvmf_init_if2" 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:22.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@173 -- # true 00:11:22.447 
01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:22.447 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@174 -- # true 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:22.447 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:22.448 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:22.448 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:22.448 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:22.707 01:31:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:22.707 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:22.707 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.059 ms 00:11:22.707 00:11:22.707 --- 10.0.0.3 ping statistics --- 00:11:22.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.707 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:22.707 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:22.707 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:11:22.707 00:11:22.707 --- 10.0.0.4 ping statistics --- 00:11:22.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.707 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:22.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:22.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:11:22.707 00:11:22.707 --- 10.0.0.1 ping statistics --- 00:11:22.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.707 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:22.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:22.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:11:22.707 00:11:22.707 --- 10.0.0.2 ping statistics --- 00:11:22.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:22.707 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@461 -- # return 0 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=66404 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 66404 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 66404 ']' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:22.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:22.707 01:31:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:22.967 [2024-11-17 01:31:31.217096] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
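With connectivity across the bridge confirmed by the four pings, the target application is launched inside the nvmf_tgt_ns_spdk namespace with --wait-for-rpc, so it pauses before subsystem initialization until an RPC client tells it to continue, and the harness blocks in waitforlisten until the RPC socket is available. A rough sketch of that launch-and-wait pattern (not the exact helper; the retry count and relative binary path are illustrative):

  ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
  nvmfpid=$!
  for (( i = 0; i < 100; i++ )); do    # poll for the RPC socket instead of sleeping blindly
      [[ -S /var/tmp/spdk.sock ]] && break
      sleep 0.1
  done
  # later, rpc_cmd framework_start_init resumes subsystem initialization (bdev_io_wait.sh@19 below)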
00:11:22.967 [2024-11-17 01:31:31.217266] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:22.967 [2024-11-17 01:31:31.401741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:23.225 [2024-11-17 01:31:31.500342] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:23.225 [2024-11-17 01:31:31.500418] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:23.225 [2024-11-17 01:31:31.500452] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:23.225 [2024-11-17 01:31:31.500463] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:23.225 [2024-11-17 01:31:31.500475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:23.225 [2024-11-17 01:31:31.502177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:23.225 [2024-11-17 01:31:31.502369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:23.225 [2024-11-17 01:31:31.502510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:23.225 [2024-11-17 01:31:31.502523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:23.792 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:23.792 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:11:23.792 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:23.792 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:23.792 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:23.793 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:23.793 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:23.793 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.793 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:23.793 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.793 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:23.793 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.793 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.052 [2024-11-17 01:31:32.358891] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.052 [2024-11-17 01:31:32.375539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.052 Malloc0 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:24.052 [2024-11-17 01:31:32.473953] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=66439 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=66441 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.052 01:31:32 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.052 { 00:11:24.052 "params": { 00:11:24.052 "name": "Nvme$subsystem", 00:11:24.052 "trtype": "$TEST_TRANSPORT", 00:11:24.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.052 "adrfam": "ipv4", 00:11:24.052 "trsvcid": "$NVMF_PORT", 00:11:24.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.052 "hdgst": ${hdgst:-false}, 00:11:24.052 "ddgst": ${ddgst:-false} 00:11:24.052 }, 00:11:24.052 "method": "bdev_nvme_attach_controller" 00:11:24.052 } 00:11:24.052 EOF 00:11:24.052 )") 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=66443 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.052 { 00:11:24.052 "params": { 00:11:24.052 "name": "Nvme$subsystem", 00:11:24.052 "trtype": "$TEST_TRANSPORT", 00:11:24.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.052 "adrfam": "ipv4", 00:11:24.052 "trsvcid": "$NVMF_PORT", 00:11:24.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.052 "hdgst": ${hdgst:-false}, 00:11:24.052 "ddgst": ${ddgst:-false} 00:11:24.052 }, 00:11:24.052 "method": "bdev_nvme_attach_controller" 00:11:24.052 } 00:11:24.052 EOF 00:11:24.052 )") 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=66446 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:11:24.052 { 00:11:24.052 "params": { 00:11:24.052 "name": "Nvme$subsystem", 00:11:24.052 "trtype": "$TEST_TRANSPORT", 00:11:24.052 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.052 "adrfam": "ipv4", 00:11:24.052 "trsvcid": "$NVMF_PORT", 00:11:24.052 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.052 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.052 "hdgst": ${hdgst:-false}, 00:11:24.052 "ddgst": ${ddgst:-false} 00:11:24.052 }, 00:11:24.052 "method": "bdev_nvme_attach_controller" 00:11:24.052 } 00:11:24.052 EOF 00:11:24.052 )") 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:24.052 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:24.052 { 00:11:24.052 "params": { 00:11:24.052 "name": "Nvme$subsystem", 00:11:24.052 "trtype": "$TEST_TRANSPORT", 00:11:24.053 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:24.053 "adrfam": "ipv4", 00:11:24.053 "trsvcid": "$NVMF_PORT", 00:11:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:24.053 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:24.053 "hdgst": ${hdgst:-false}, 00:11:24.053 "ddgst": ${ddgst:-false} 00:11:24.053 }, 00:11:24.053 "method": "bdev_nvme_attach_controller" 00:11:24.053 } 00:11:24.053 EOF 00:11:24.053 )") 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.053 "params": { 00:11:24.053 "name": "Nvme1", 00:11:24.053 "trtype": "tcp", 00:11:24.053 "traddr": "10.0.0.3", 00:11:24.053 "adrfam": "ipv4", 00:11:24.053 "trsvcid": "4420", 00:11:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.053 "hdgst": false, 00:11:24.053 "ddgst": false 00:11:24.053 }, 00:11:24.053 "method": "bdev_nvme_attach_controller" 00:11:24.053 }' 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.053 "params": { 00:11:24.053 "name": "Nvme1", 00:11:24.053 "trtype": "tcp", 00:11:24.053 "traddr": "10.0.0.3", 00:11:24.053 "adrfam": "ipv4", 00:11:24.053 "trsvcid": "4420", 00:11:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.053 "hdgst": false, 00:11:24.053 "ddgst": false 00:11:24.053 }, 00:11:24.053 "method": "bdev_nvme_attach_controller" 00:11:24.053 }' 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.053 "params": { 00:11:24.053 "name": "Nvme1", 00:11:24.053 "trtype": "tcp", 00:11:24.053 "traddr": "10.0.0.3", 00:11:24.053 "adrfam": "ipv4", 00:11:24.053 "trsvcid": "4420", 00:11:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.053 "hdgst": false, 00:11:24.053 "ddgst": false 00:11:24.053 }, 00:11:24.053 "method": "bdev_nvme_attach_controller" 00:11:24.053 }' 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:24.053 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:24.053 "params": { 00:11:24.053 "name": "Nvme1", 00:11:24.053 "trtype": "tcp", 00:11:24.053 "traddr": "10.0.0.3", 00:11:24.053 "adrfam": "ipv4", 00:11:24.053 "trsvcid": "4420", 00:11:24.053 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:24.053 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:24.053 "hdgst": false, 00:11:24.053 "ddgst": false 00:11:24.053 }, 00:11:24.053 "method": "bdev_nvme_attach_controller" 00:11:24.053 }' 00:11:24.312 01:31:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 66439 00:11:24.312 [2024-11-17 01:31:32.591992] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:24.312 [2024-11-17 01:31:32.592153] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:11:24.312 [2024-11-17 01:31:32.613445] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:24.312 [2024-11-17 01:31:32.613630] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:24.312 [2024-11-17 01:31:32.621107] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:24.312 [2024-11-17 01:31:32.621272] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:11:24.312 [2024-11-17 01:31:32.622478] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:11:24.312 [2024-11-17 01:31:32.622753] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:24.570 [2024-11-17 01:31:32.812241] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.570 [2024-11-17 01:31:32.859100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.570 [2024-11-17 01:31:32.904110] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.570 [2024-11-17 01:31:32.939414] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:24.570 [2024-11-17 01:31:32.955670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.570 [2024-11-17 01:31:32.976373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:24.570 [2024-11-17 01:31:32.999952] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:24.828 [2024-11-17 01:31:33.074719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:24.828 [2024-11-17 01:31:33.121428] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.828 [2024-11-17 01:31:33.155885] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.828 [2024-11-17 01:31:33.162876] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:24.828 [2024-11-17 01:31:33.258550] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:25.086 Running I/O for 1 seconds... 00:11:25.086 Running I/O for 1 seconds... 00:11:25.086 Running I/O for 1 seconds... 00:11:25.086 Running I/O for 1 seconds... 
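Each of the four bdevperf jobs above (write, read, flush and unmap on core masks 0x10, 0x20, 0x40 and 0x80) receives its controller configuration as JSON on /dev/fd/63: the output of gen_nvmf_target_json is fed through process substitution rather than a temp file, and the printed "bdev_nvme_attach_controller" params point every job at the same nqn.2016-06.io.spdk:cnode1 listener on 10.0.0.3:4420. A trimmed sketch of one such invocation, assuming the harness's gen_nvmf_target_json function is in scope (it wraps the printed fragment into a full bdev-subsystem config):

  # the JSON emitted by gen_nvmf_target_json arrives on /dev/fd/63 inside bdevperf
  ./build/examples/bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
      -q 128 -o 4096 -w write -t 1 -s 256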
00:11:26.021 8811.00 IOPS, 34.42 MiB/s 00:11:26.021 Latency(us) 00:11:26.021 [2024-11-17T01:31:34.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.021 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:26.021 Nvme1n1 : 1.01 8864.50 34.63 0.00 0.00 14370.26 3798.11 20733.21 00:11:26.021 [2024-11-17T01:31:34.480Z] =================================================================================================================== 00:11:26.021 [2024-11-17T01:31:34.480Z] Total : 8864.50 34.63 0.00 0.00 14370.26 3798.11 20733.21 00:11:26.021 3747.00 IOPS, 14.64 MiB/s [2024-11-17T01:31:34.480Z] 141584.00 IOPS, 553.06 MiB/s 00:11:26.021 Latency(us) 00:11:26.021 [2024-11-17T01:31:34.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.021 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:26.021 Nvme1n1 : 1.00 141268.21 551.83 0.00 0.00 901.46 441.25 2219.29 00:11:26.021 [2024-11-17T01:31:34.480Z] =================================================================================================================== 00:11:26.021 [2024-11-17T01:31:34.480Z] Total : 141268.21 551.83 0.00 0.00 901.46 441.25 2219.29 00:11:26.021 00:11:26.021 Latency(us) 00:11:26.021 [2024-11-17T01:31:34.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.021 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:26.021 Nvme1n1 : 1.03 3762.10 14.70 0.00 0.00 33315.30 10187.87 58148.31 00:11:26.021 [2024-11-17T01:31:34.480Z] =================================================================================================================== 00:11:26.021 [2024-11-17T01:31:34.480Z] Total : 3762.10 14.70 0.00 0.00 33315.30 10187.87 58148.31 00:11:26.021 3661.00 IOPS, 14.30 MiB/s 00:11:26.021 Latency(us) 00:11:26.021 [2024-11-17T01:31:34.480Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:26.021 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:26.021 Nvme1n1 : 1.01 3745.10 14.63 0.00 0.00 33948.84 11319.85 62914.56 00:11:26.021 [2024-11-17T01:31:34.480Z] =================================================================================================================== 00:11:26.021 [2024-11-17T01:31:34.480Z] Total : 3745.10 14.63 0.00 0.00 33948.84 11319.85 62914.56 00:11:26.608 01:31:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 66441 00:11:26.608 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 66443 00:11:26.608 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 66446 00:11:26.608 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:26.608 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.608 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:26.876 rmmod nvme_tcp 00:11:26.876 rmmod nvme_fabrics 00:11:26.876 rmmod nvme_keyring 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 66404 ']' 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 66404 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 66404 ']' 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 66404 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66404 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:26.876 killing process with pid 66404 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66404' 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 66404 00:11:26.876 01:31:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 66404 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:27.811 01:31:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:27.811 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@300 -- # return 0 00:11:28.070 00:11:28.070 real 0m5.842s 00:11:28.070 user 0m24.962s 00:11:28.070 sys 0m2.557s 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:28.070 ************************************ 00:11:28.070 END TEST nvmf_bdev_io_wait 00:11:28.070 ************************************ 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:28.070 ************************************ 00:11:28.070 START TEST nvmf_queue_depth 00:11:28.070 ************************************ 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:28.070 * Looking for test 
storage... 00:11:28.070 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:28.070 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:28.071 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:28.071 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:28.071 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:28.071 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:28.330 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:28.330 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:28.330 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:28.330 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:28.330 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:28.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.331 --rc genhtml_branch_coverage=1 00:11:28.331 --rc genhtml_function_coverage=1 00:11:28.331 --rc genhtml_legend=1 00:11:28.331 --rc geninfo_all_blocks=1 00:11:28.331 --rc geninfo_unexecuted_blocks=1 00:11:28.331 00:11:28.331 ' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:28.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.331 --rc genhtml_branch_coverage=1 00:11:28.331 --rc genhtml_function_coverage=1 00:11:28.331 --rc genhtml_legend=1 00:11:28.331 --rc geninfo_all_blocks=1 00:11:28.331 --rc geninfo_unexecuted_blocks=1 00:11:28.331 00:11:28.331 ' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:28.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.331 --rc genhtml_branch_coverage=1 00:11:28.331 --rc genhtml_function_coverage=1 00:11:28.331 --rc genhtml_legend=1 00:11:28.331 --rc geninfo_all_blocks=1 00:11:28.331 --rc geninfo_unexecuted_blocks=1 00:11:28.331 00:11:28.331 ' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:28.331 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:28.331 --rc genhtml_branch_coverage=1 00:11:28.331 --rc genhtml_function_coverage=1 00:11:28.331 --rc genhtml_legend=1 00:11:28.331 --rc geninfo_all_blocks=1 00:11:28.331 --rc geninfo_unexecuted_blocks=1 00:11:28.331 00:11:28.331 ' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 
-- # uname -s 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:28.331 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:11:28.331 
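target/queue_depth.sh@14-15 size the backing device for this test: a 64 MiB malloc bdev with 512-byte blocks. Once its target is up, the bdev is exported over TCP the same way the bdev_io_wait run above did it; for reference, that earlier sequence of RPCs looks like the following when written as standalone rpc.py calls instead of the rpc_cmd wrapper (the queue_depth script may differ in naming and serial-number details):

  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420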
01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:28.331 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:28.332 01:31:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:28.332 Cannot find device "nvmf_init_br" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:28.332 Cannot find device "nvmf_init_br2" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:28.332 Cannot find device "nvmf_tgt_br" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@164 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:28.332 Cannot find device "nvmf_tgt_br2" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@165 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:28.332 Cannot find device "nvmf_init_br" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@166 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:28.332 Cannot find device "nvmf_init_br2" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@167 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:28.332 Cannot find device "nvmf_tgt_br" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@168 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:28.332 Cannot find device "nvmf_tgt_br2" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:28.332 Cannot find device "nvmf_br" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@170 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:28.332 Cannot find device "nvmf_init_if" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:28.332 Cannot find device "nvmf_init_if2" 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:28.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.332 01:31:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@173 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:28.332 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@174 -- # true 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:28.332 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:28.591 
01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:28.591 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:28.591 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.058 ms 00:11:28.591 00:11:28.591 --- 10.0.0.3 ping statistics --- 00:11:28.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.591 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:28.591 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:28.591 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:11:28.591 00:11:28.591 --- 10.0.0.4 ping statistics --- 00:11:28.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.591 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:28.591 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:28.591 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:11:28.591 00:11:28.591 --- 10.0.0.1 ping statistics --- 00:11:28.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.591 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:28.591 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:28.591 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:11:28.591 00:11:28.591 --- 10.0.0.2 ping statistics --- 00:11:28.591 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.591 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@461 -- # return 0 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=66750 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 66750 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 66750 ']' 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:28.591 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:28.592 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.592 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:28.592 01:31:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:28.851 [2024-11-17 01:31:37.087080] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
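For readers following the trace, the nvmf_veth_init phase above boils down to the topology sketched below. This is a condensed approximation built only from the interface, namespace and address names visible in the trace, not the literal nvmf/common.sh code (which also brings every link up, sets lo up inside the namespace, and handles cleanup/retries):

# Sketch: two initiator-side veth ends stay in the root namespace; the two
# target-side ends move into a private namespace; a bridge joins all four.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator path 1
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2    # initiator path 2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target path 1
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2     # target path 2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link add nvmf_br type bridge && ip link set nvmf_br up
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" master nvmf_br
done
# Allow NVMe/TCP (port 4420) in, plus bridge-local forwarding; the rules are
# tagged with a comment so teardown can later remove exactly these entries.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.3    # root namespace reaches the target namespace, and vice versa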
00:11:28.851 [2024-11-17 01:31:37.087240] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.851 [2024-11-17 01:31:37.269919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:29.110 [2024-11-17 01:31:37.360956] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.110 [2024-11-17 01:31:37.361025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:29.110 [2024-11-17 01:31:37.361059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.110 [2024-11-17 01:31:37.361080] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.110 [2024-11-17 01:31:37.361093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.110 [2024-11-17 01:31:37.362334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.110 [2024-11-17 01:31:37.518185] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:29.679 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.679 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:29.679 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.679 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.679 01:31:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.679 [2024-11-17 01:31:38.040264] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.679 Malloc0 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # 
set +x 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.679 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.939 [2024-11-17 01:31:38.142677] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=66782 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 66782 /var/tmp/bdevperf.sock 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 66782 ']' 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:29.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.939 01:31:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:29.939 [2024-11-17 01:31:38.233149] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
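Stripped of the xtrace and helper plumbing, the queue_depth test body traced above reduces to a handful of SPDK RPCs plus one bdevperf run. A rough sketch follows (rpc.py and bdevperf paths abbreviated; in the trace the target-side RPCs go to the nvmf_tgt running inside nvmf_tgt_ns_spdk):

# Target side: TCP transport, a 64 MiB malloc bdev with 512-byte blocks, one
# subsystem with one namespace, listening on the first in-namespace address.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

# Initiator side: bdevperf drives the queue-depth workload -- 1024 outstanding
# 4 KiB I/Os with verification, for 10 seconds -- against that subsystem.
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests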
00:11:29.939 [2024-11-17 01:31:38.233318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66782 ] 00:11:30.199 [2024-11-17 01:31:38.405948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.199 [2024-11-17 01:31:38.532260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.458 [2024-11-17 01:31:38.715371] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:31.027 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:31.027 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:11:31.027 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:31.027 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.027 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:31.027 NVMe0n1 00:11:31.027 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.027 01:31:39 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:31.286 Running I/O for 10 seconds... 00:11:33.159 5881.00 IOPS, 22.97 MiB/s [2024-11-17T01:31:42.556Z] 6178.50 IOPS, 24.13 MiB/s [2024-11-17T01:31:43.936Z] 6405.67 IOPS, 25.02 MiB/s [2024-11-17T01:31:44.505Z] 6404.50 IOPS, 25.02 MiB/s [2024-11-17T01:31:45.882Z] 6453.20 IOPS, 25.21 MiB/s [2024-11-17T01:31:46.821Z] 6494.83 IOPS, 25.37 MiB/s [2024-11-17T01:31:47.758Z] 6525.43 IOPS, 25.49 MiB/s [2024-11-17T01:31:48.696Z] 6572.25 IOPS, 25.67 MiB/s [2024-11-17T01:31:49.633Z] 6671.11 IOPS, 26.06 MiB/s [2024-11-17T01:31:49.634Z] 6768.80 IOPS, 26.44 MiB/s 00:11:41.175 Latency(us) 00:11:41.175 [2024-11-17T01:31:49.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.175 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:41.175 Verification LBA range: start 0x0 length 0x4000 00:11:41.175 NVMe0n1 : 10.11 6794.10 26.54 0.00 0.00 149982.51 25618.62 109623.85 00:11:41.175 [2024-11-17T01:31:49.634Z] =================================================================================================================== 00:11:41.175 [2024-11-17T01:31:49.634Z] Total : 6794.10 26.54 0.00 0.00 149982.51 25618.62 109623.85 00:11:41.175 { 00:11:41.175 "results": [ 00:11:41.175 { 00:11:41.175 "job": "NVMe0n1", 00:11:41.175 "core_mask": "0x1", 00:11:41.175 "workload": "verify", 00:11:41.175 "status": "finished", 00:11:41.175 "verify_range": { 00:11:41.175 "start": 0, 00:11:41.175 "length": 16384 00:11:41.175 }, 00:11:41.175 "queue_depth": 1024, 00:11:41.175 "io_size": 4096, 00:11:41.175 "runtime": 10.111414, 00:11:41.175 "iops": 6794.104167824599, 00:11:41.175 "mibps": 26.53946940556484, 00:11:41.175 "io_failed": 0, 00:11:41.175 "io_timeout": 0, 00:11:41.175 "avg_latency_us": 149982.50673159733, 00:11:41.175 "min_latency_us": 25618.618181818183, 00:11:41.175 "max_latency_us": 109623.85454545454 
00:11:41.175 } 00:11:41.175 ], 00:11:41.175 "core_count": 1 00:11:41.175 } 00:11:41.175 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 66782 00:11:41.175 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 66782 ']' 00:11:41.175 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 66782 00:11:41.175 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:41.175 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:41.434 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66782 00:11:41.434 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:41.434 killing process with pid 66782 00:11:41.434 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:41.434 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66782' 00:11:41.434 Received shutdown signal, test time was about 10.000000 seconds 00:11:41.434 00:11:41.434 Latency(us) 00:11:41.434 [2024-11-17T01:31:49.893Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:41.434 [2024-11-17T01:31:49.893Z] =================================================================================================================== 00:11:41.434 [2024-11-17T01:31:49.893Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:41.434 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 66782 00:11:41.434 01:31:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 66782 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:42.001 rmmod nvme_tcp 00:11:42.001 rmmod nvme_fabrics 00:11:42.001 rmmod nvme_keyring 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 66750 ']' 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 66750 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 66750 ']' 
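A quick sanity check on the result above: with a fixed queue depth of 1024 outstanding I/Os, Little's law (outstanding = IOPS x average latency) predicts an average latency of about 1024 / 6794 ≈ 0.151 s, i.e. roughly 151 ms, which lines up with the reported average of 149982.51 us (about 150 ms). The reported IOPS and latency are therefore mutually consistent, confirming that bdevperf kept the full 1024-deep queue outstanding for the whole 10-second run, which is exactly what this test is meant to exercise.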
00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 66750 00:11:42.001 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:11:42.260 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:42.260 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66750 00:11:42.260 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:42.260 killing process with pid 66750 00:11:42.260 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:42.260 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66750' 00:11:42.260 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 66750 00:11:42.260 01:31:50 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 66750 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:11:43.248 01:31:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@246 -- # remove_spdk_ns 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.248 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@300 -- # return 0 00:11:43.508 00:11:43.508 real 0m15.336s 00:11:43.508 user 0m25.765s 00:11:43.508 sys 0m2.368s 00:11:43.508 ************************************ 00:11:43.508 END TEST nvmf_queue_depth 00:11:43.508 ************************************ 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:43.508 ************************************ 00:11:43.508 START TEST nvmf_target_multipath 00:11:43.508 ************************************ 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:43.508 * Looking for test storage... 
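The teardown traced just above (nvmftestfini, nvmf_tcp_fini, nvmf_veth_fini) is the mirror image of the setup. A condensed sketch using the same names follows; the final namespace removal happens inside _remove_spdk_ns, whose output is suppressed in the trace, so that last step is an assumption here:

modprobe -r nvme-tcp                  # also pulls out nvme_fabrics/nvme_keyring, as the rmmod lines show
kill "$nvmfpid"                       # stop the nvmf_tgt started for this test (pid 66750 above)
# Remove only the firewall rules tagged SPDK_NVMF, leaving everything else intact.
iptables-save | grep -v SPDK_NVMF | iptables-restore
# Detach the bridge ports, then delete the bridge, the veth pairs and the namespace.
for port in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$port" nomaster
    ip link set "$port" down
done
ip link delete nvmf_br type bridge
ip link delete nvmf_init_if
ip link delete nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
ip netns delete nvmf_tgt_ns_spdk      # assumed: this is what _remove_spdk_ns does behind the suppressed xtrace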
00:11:43.508 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.508 --rc genhtml_branch_coverage=1 00:11:43.508 --rc genhtml_function_coverage=1 00:11:43.508 --rc genhtml_legend=1 00:11:43.508 --rc geninfo_all_blocks=1 00:11:43.508 --rc geninfo_unexecuted_blocks=1 00:11:43.508 00:11:43.508 ' 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.508 --rc genhtml_branch_coverage=1 00:11:43.508 --rc genhtml_function_coverage=1 00:11:43.508 --rc genhtml_legend=1 00:11:43.508 --rc geninfo_all_blocks=1 00:11:43.508 --rc geninfo_unexecuted_blocks=1 00:11:43.508 00:11:43.508 ' 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.508 --rc genhtml_branch_coverage=1 00:11:43.508 --rc genhtml_function_coverage=1 00:11:43.508 --rc genhtml_legend=1 00:11:43.508 --rc geninfo_all_blocks=1 00:11:43.508 --rc geninfo_unexecuted_blocks=1 00:11:43.508 00:11:43.508 ' 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:43.508 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:43.508 --rc genhtml_branch_coverage=1 00:11:43.508 --rc genhtml_function_coverage=1 00:11:43.508 --rc genhtml_legend=1 00:11:43.508 --rc geninfo_all_blocks=1 00:11:43.508 --rc geninfo_unexecuted_blocks=1 00:11:43.508 00:11:43.508 ' 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:43.508 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:43.767 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.768 
01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:43.768 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:11:43.768 01:31:51 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:11:43.768 Cannot find device "nvmf_init_br" 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # true 00:11:43.768 01:31:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:11:43.768 Cannot find device "nvmf_init_br2" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:11:43.768 Cannot find device "nvmf_tgt_br" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@164 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:11:43.768 Cannot find device "nvmf_tgt_br2" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@165 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:11:43.768 Cannot find device "nvmf_init_br" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@166 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:11:43.768 Cannot find device "nvmf_init_br2" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@167 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:11:43.768 Cannot find device "nvmf_tgt_br" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@168 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:11:43.768 Cannot find device "nvmf_tgt_br2" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:11:43.768 Cannot find device "nvmf_br" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@170 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:11:43.768 Cannot find device "nvmf_init_if" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@171 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:11:43.768 Cannot find device "nvmf_init_if2" 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:11:43.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@173 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:11:43.768 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@174 -- # true 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:11:43.768 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 
00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:11:44.028 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:11:44.029 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:11:44.029 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:11:44.029 00:11:44.029 --- 10.0.0.3 ping statistics --- 00:11:44.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.029 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:11:44.029 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:11:44.029 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:11:44.029 00:11:44.029 --- 10.0.0.4 ping statistics --- 00:11:44.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.029 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:11:44.029 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.029 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:11:44.029 00:11:44.029 --- 10.0.0.1 ping statistics --- 00:11:44.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.029 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:11:44.029 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.029 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:11:44.029 00:11:44.029 --- 10.0.0.2 ping statistics --- 00:11:44.029 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.029 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@461 -- # return 0 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z 10.0.0.4 ']' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:44.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
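One detail worth noting before the target comes up: queue_depth ran its target single-threaded with -m 0x2 (binary 10, so a single reactor on core 1), while multipath starts nvmf_tgt with -m 0xF (binary 1111, cores 0 through 3). That is why the startup messages that follow report 'Total cores available: 4' and four separate 'Reactor started' notices.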
00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@509 -- # nvmfpid=67174 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@510 -- # waitforlisten 67174 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@835 -- # '[' -z 67174 ']' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:44.029 01:31:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:44.288 [2024-11-17 01:31:52.530683] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:11:44.288 [2024-11-17 01:31:52.530882] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.288 [2024-11-17 01:31:52.720824] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.547 [2024-11-17 01:31:52.853681] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.547 [2024-11-17 01:31:52.853756] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.547 [2024-11-17 01:31:52.853786] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.547 [2024-11-17 01:31:52.853822] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.547 [2024-11-17 01:31:52.853840] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
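The multipath-specific part follows: the same subsystem is exported through two listeners (one per target-namespace address) and the initiator connects once per path, so the host's native NVMe multipath ends up with a single namespace reachable through two controllers. A condensed sketch of the commands visible in the trace; the destination of the second nvme connect is cut off in this part of the log and is assumed here to be the second listener:

rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r   # -r enables ANA reporting
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420

# One connect per path (extra flags carried over verbatim from the trace).
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G
nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.4 -s 4420 -g -G   # assumed second path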
00:11:44.547 [2024-11-17 01:31:52.856059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.547 [2024-11-17 01:31:52.856220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.547 [2024-11-17 01:31:52.856352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:44.547 [2024-11-17 01:31:52.856419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.805 [2024-11-17 01:31:53.057749] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:11:45.373 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:45.373 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@868 -- # return 0 00:11:45.373 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:45.373 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:45.373 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:45.373 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.373 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:45.373 [2024-11-17 01:31:53.798954] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.632 01:31:53 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:11:45.891 Malloc0 00:11:45.891 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:11:46.150 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:46.409 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:11:46.667 [2024-11-17 01:31:54.898446] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:11:46.667 01:31:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 00:11:46.926 [2024-11-17 01:31:55.146728] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.4 port 4420 *** 00:11:46.926 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:11:46.926 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode1 
-a 10.0.0.4 -s 4420 -g -G 00:11:47.185 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:11:47.186 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1202 -- # local i=0 00:11:47.186 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:47.186 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:47.186 01:31:55 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1209 -- # sleep 2 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1212 -- # return 0 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # echo nvme-subsys0 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@38 -- # return 0 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 
00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@85 -- # echo numa 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@88 -- # fio_pid=67269 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:49.091 01:31:57 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@90 -- # sleep 1 00:11:49.091 [global] 00:11:49.091 thread=1 00:11:49.091 invalidate=1 00:11:49.091 rw=randrw 00:11:49.091 time_based=1 00:11:49.091 runtime=6 00:11:49.091 ioengine=libaio 00:11:49.091 direct=1 00:11:49.091 bs=4096 00:11:49.091 iodepth=128 00:11:49.091 norandommap=0 00:11:49.091 numjobs=1 00:11:49.091 00:11:49.091 verify_dump=1 00:11:49.091 verify_backlog=512 00:11:49.091 verify_state_save=0 00:11:49.091 do_verify=1 00:11:49.091 verify=crc32c-intel 00:11:49.091 [job0] 00:11:49.091 filename=/dev/nvme0n1 00:11:49.091 Could not set queue depth (nvme0n1) 00:11:49.350 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:49.350 fio-3.35 00:11:49.350 Starting 1 thread 00:11:50.287 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:50.546 01:31:58 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 
00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:50.805 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:51.063 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:51.322 01:31:59 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@104 -- # wait 67269 00:11:55.514 00:11:55.514 job0: (groupid=0, jobs=1): err= 0: pid=67290: Sun Nov 17 01:32:03 2024 00:11:55.514 read: IOPS=8880, BW=34.7MiB/s (36.4MB/s)(208MiB/6003msec) 00:11:55.514 slat (usec): min=5, max=7375, avg=68.06, stdev=279.39 00:11:55.514 clat (usec): min=1417, max=19054, avg=9915.27, stdev=1837.81 00:11:55.514 lat (usec): min=1427, max=19062, avg=9983.33, stdev=1843.13 00:11:55.514 clat percentiles (usec): 00:11:55.514 | 1.00th=[ 4948], 5.00th=[ 7242], 10.00th=[ 8291], 20.00th=[ 8848], 00:11:55.514 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:11:55.514 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11731], 95.00th=[14091], 00:11:55.514 | 99.00th=[15664], 99.50th=[16319], 99.90th=[16909], 99.95th=[17171], 00:11:55.514 | 99.99th=[17695] 00:11:55.514 bw ( KiB/s): min= 3816, max=24392, per=50.32%, avg=17876.00, stdev=5680.79, samples=11 00:11:55.514 iops : min= 954, max= 6098, avg=4469.00, stdev=1420.20, samples=11 00:11:55.514 write: IOPS=5056, BW=19.8MiB/s (20.7MB/s)(106MiB/5370msec); 0 zone resets 00:11:55.514 slat (usec): min=11, max=3397, avg=76.54, stdev=201.53 00:11:55.514 clat (usec): min=1271, max=18374, avg=8572.97, stdev=1647.80 00:11:55.514 lat (usec): min=1297, max=18396, avg=8649.50, stdev=1652.84 00:11:55.514 clat percentiles (usec): 00:11:55.514 | 1.00th=[ 3752], 5.00th=[ 4948], 10.00th=[ 6390], 20.00th=[ 7898], 00:11:55.514 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:11:55.514 | 70.00th=[ 9241], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10421], 00:11:55.514 | 99.00th=[13435], 99.50th=[14353], 99.90th=[16188], 99.95th=[16909], 00:11:55.514 | 99.99th=[18220] 00:11:55.514 bw ( KiB/s): min= 4096, max=23904, per=88.30%, avg=17861.64, stdev=5599.09, samples=11 00:11:55.514 iops : min= 1024, max= 5976, avg=4465.36, stdev=1399.74, samples=11 00:11:55.514 lat (msec) : 2=0.02%, 4=0.69%, 10=68.71%, 20=30.58% 00:11:55.514 cpu : usr=5.01%, sys=18.34%, ctx=4633, majf=0, minf=90 00:11:55.514 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:11:55.514 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:55.514 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:55.514 issued rwts: total=53311,27156,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:55.514 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:55.514 00:11:55.514 Run status group 0 (all jobs): 00:11:55.514 READ: bw=34.7MiB/s (36.4MB/s), 34.7MiB/s-34.7MiB/s (36.4MB/s-36.4MB/s), io=208MiB (218MB), run=6003-6003msec 00:11:55.514 WRITE: bw=19.8MiB/s (20.7MB/s), 19.8MiB/s-19.8MiB/s (20.7MB/s-20.7MB/s), io=106MiB (111MB), run=5370-5370msec 00:11:55.514 00:11:55.514 Disk stats (read/write): 00:11:55.514 nvme0n1: ios=52547/26644, merge=0/0, ticks=501110/216085, in_queue=717195, util=98.63% 00:11:55.514 01:32:03 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:11:55.774 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n optimized 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@109 -- # check_ana_state nvme0c0n1 optimized 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@113 -- # echo round-robin 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@116 -- # fio_pid=67368 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:11:56.033 01:32:04 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@118 -- # sleep 1 00:11:56.033 [global] 00:11:56.033 thread=1 00:11:56.033 invalidate=1 00:11:56.033 rw=randrw 00:11:56.033 time_based=1 00:11:56.033 runtime=6 00:11:56.033 ioengine=libaio 00:11:56.033 direct=1 00:11:56.033 bs=4096 00:11:56.033 iodepth=128 00:11:56.033 norandommap=0 00:11:56.033 numjobs=1 00:11:56.033 00:11:56.033 verify_dump=1 00:11:56.033 verify_backlog=512 00:11:56.033 verify_state_save=0 00:11:56.033 do_verify=1 00:11:56.033 verify=crc32c-intel 00:11:56.033 [job0] 00:11:56.033 filename=/dev/nvme0n1 00:11:56.033 Could not set queue depth (nvme0n1) 00:11:56.292 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:56.292 fio-3.35 00:11:56.292 Starting 1 thread 00:11:57.230 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:11:57.489 01:32:05 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n non_optimized 00:11:57.748 
01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:57.748 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:11:58.008 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.4 -s 4420 -n inaccessible 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@22 -- # local timeout=20 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:11:58.267 01:32:06 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@132 -- # wait 67368 00:12:02.493 00:12:02.493 job0: (groupid=0, jobs=1): err= 0: pid=67395: Sun Nov 17 01:32:10 2024 00:12:02.493 read: IOPS=9517, BW=37.2MiB/s (39.0MB/s)(223MiB/6008msec) 00:12:02.493 slat (usec): min=2, max=7783, avg=52.30, stdev=240.69 00:12:02.493 clat (usec): min=397, max=29615, avg=9330.01, stdev=2422.88 00:12:02.493 lat (usec): min=416, max=29629, avg=9382.31, stdev=2441.86 00:12:02.493 clat percentiles (usec): 00:12:02.493 | 1.00th=[ 3359], 5.00th=[ 5145], 10.00th=[ 6128], 20.00th=[ 7308], 00:12:02.493 | 30.00th=[ 8455], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10028], 00:12:02.493 | 70.00th=[10421], 80.00th=[10814], 90.00th=[11600], 95.00th=[13042], 00:12:02.493 | 99.00th=[15926], 99.50th=[16450], 99.90th=[19530], 99.95th=[26870], 00:12:02.493 | 99.99th=[29492] 00:12:02.493 bw ( KiB/s): min=10032, max=30256, per=51.16%, avg=19478.00, stdev=6351.40, samples=12 00:12:02.493 iops : min= 2508, max= 7564, avg=4869.50, stdev=1587.85, samples=12 00:12:02.493 write: IOPS=5653, BW=22.1MiB/s (23.2MB/s)(115MiB/5194msec); 0 zone resets 00:12:02.493 slat (usec): min=4, max=14389, avg=64.79, stdev=205.07 00:12:02.493 clat (usec): min=682, max=28015, avg=7812.71, stdev=2459.66 00:12:02.493 lat (usec): min=721, max=28033, avg=7877.50, stdev=2482.42 00:12:02.493 clat percentiles (usec): 00:12:02.493 | 1.00th=[ 2933], 5.00th=[ 3916], 10.00th=[ 4490], 20.00th=[ 5276], 00:12:02.493 | 30.00th=[ 6128], 40.00th=[ 7570], 50.00th=[ 8455], 60.00th=[ 8979], 00:12:02.493 | 70.00th=[ 9372], 80.00th=[ 9765], 90.00th=[10290], 95.00th=[10683], 00:12:02.493 | 99.00th=[13960], 99.50th=[15008], 99.90th=[24773], 99.95th=[26608], 00:12:02.493 | 99.99th=[27919] 00:12:02.493 bw ( KiB/s): min=10288, max=31344, per=86.41%, avg=19540.00, stdev=6265.52, samples=12 00:12:02.493 iops : min= 2572, max= 7836, avg=4885.00, stdev=1566.38, samples=12 00:12:02.493 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.03% 00:12:02.493 lat (msec) : 2=0.22%, 4=2.70%, 10=65.33%, 20=31.57%, 50=0.13% 00:12:02.493 cpu : usr=5.29%, sys=19.11%, ctx=4914, majf=0, minf=108 00:12:02.493 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:12:02.493 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:02.493 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:02.493 issued rwts: total=57184,29364,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:02.493 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:12:02.493 00:12:02.493 Run status group 0 (all jobs): 00:12:02.493 READ: bw=37.2MiB/s (39.0MB/s), 37.2MiB/s-37.2MiB/s (39.0MB/s-39.0MB/s), io=223MiB (234MB), run=6008-6008msec 00:12:02.493 WRITE: bw=22.1MiB/s (23.2MB/s), 22.1MiB/s-22.1MiB/s (23.2MB/s-23.2MB/s), io=115MiB (120MB), run=5194-5194msec 00:12:02.493 00:12:02.493 Disk stats (read/write): 00:12:02.493 nvme0n1: ios=56427/28905, merge=0/0, ticks=505015/212725, in_queue=717740, util=98.56% 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:02.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1223 -- # local i=0 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1235 -- # return 0 00:12:02.493 01:32:10 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@144 -- # nvmftestfini 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:02.752 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:02.752 rmmod nvme_tcp 00:12:02.752 rmmod nvme_fabrics 00:12:03.011 rmmod nvme_keyring 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # 
'[' -n 67174 ']' 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@518 -- # killprocess 67174 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@954 -- # '[' -z 67174 ']' 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@958 -- # kill -0 67174 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # uname 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:03.011 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67174 00:12:03.012 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:03.012 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:03.012 killing process with pid 67174 00:12:03.012 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67174' 00:12:03.012 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@973 -- # kill 67174 00:12:03.012 01:32:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@978 -- # wait 67174 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:03.949 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:04.208 
01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.208 ************************************ 00:12:04.208 END TEST nvmf_target_multipath 00:12:04.208 ************************************ 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@300 -- # return 0 00:12:04.208 00:12:04.208 real 0m20.821s 00:12:04.208 user 1m15.980s 00:12:04.208 sys 0m9.644s 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:04.208 ************************************ 00:12:04.208 START TEST nvmf_zcopy 00:12:04.208 ************************************ 00:12:04.208 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:04.468 * Looking for test storage... 
00:12:04.468 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:04.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.468 --rc genhtml_branch_coverage=1 00:12:04.468 --rc genhtml_function_coverage=1 00:12:04.468 --rc genhtml_legend=1 00:12:04.468 --rc geninfo_all_blocks=1 00:12:04.468 --rc geninfo_unexecuted_blocks=1 00:12:04.468 00:12:04.468 ' 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:04.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.468 --rc genhtml_branch_coverage=1 00:12:04.468 --rc genhtml_function_coverage=1 00:12:04.468 --rc genhtml_legend=1 00:12:04.468 --rc geninfo_all_blocks=1 00:12:04.468 --rc geninfo_unexecuted_blocks=1 00:12:04.468 00:12:04.468 ' 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:04.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.468 --rc genhtml_branch_coverage=1 00:12:04.468 --rc genhtml_function_coverage=1 00:12:04.468 --rc genhtml_legend=1 00:12:04.468 --rc geninfo_all_blocks=1 00:12:04.468 --rc geninfo_unexecuted_blocks=1 00:12:04.468 00:12:04.468 ' 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:04.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:04.468 --rc genhtml_branch_coverage=1 00:12:04.468 --rc genhtml_function_coverage=1 00:12:04.468 --rc genhtml_legend=1 00:12:04.468 --rc geninfo_all_blocks=1 00:12:04.468 --rc geninfo_unexecuted_blocks=1 00:12:04.468 00:12:04.468 ' 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
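The cmp_versions helper being traced above decides whether the installed lcov is new enough for the coverage options. A rough standalone equivalent of that component-by-component dotted-version comparison (ver_lt is an illustrative name, not a function from the SPDK scripts):

# Sketch: "version A is less than version B", splitting on the same .-: separators.
ver_lt() {
    local IFS=.-:
    local -a a=($1) b=($2)
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1   # higher component -> not less-than
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0   # lower component  -> less-than
    done
    return 1                                        # equal -> not less-than
}
ver_lt 1.15 2 && echo "lcov is older than 2"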
00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:04.468 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:04.469 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
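nvmftestinit above ends up in nvmf_veth_init, and the ip/iptables commands that follow in the log build its virtual topology step by step. A condensed view of that layout, using only the names, addresses and commands visible below (root required):

# Sketch: two initiator addresses on the host, two target addresses inside a
# network namespace, all veth ends joined by one bridge, port 4420 allowed in.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side, stays on the host
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side, moved into the namespace
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up; ip link set nvmf_init_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
for l in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$l" up && ip link set "$l" master nvmf_br
done
# Let NVMe/TCP traffic reach the initiator-side interfaces and cross the bridge
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT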
00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:04.469 Cannot find device "nvmf_init_br" 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # true 00:12:04.469 01:32:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:04.469 Cannot find device "nvmf_init_br2" 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # true 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:04.469 Cannot find device "nvmf_tgt_br" 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@164 -- # true 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:04.469 Cannot find device "nvmf_tgt_br2" 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@165 -- # true 00:12:04.469 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:04.728 Cannot find device "nvmf_init_br" 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@166 -- # true 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:04.728 Cannot find device "nvmf_init_br2" 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@167 -- # true 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:04.728 Cannot find device "nvmf_tgt_br" 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@168 -- # true 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:04.728 Cannot find device "nvmf_tgt_br2" 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # true 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:04.728 Cannot find device "nvmf_br" 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@170 -- # true 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:04.728 Cannot find device "nvmf_init_if" 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # true 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:04.728 Cannot find device "nvmf_init_if2" 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # true 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:04.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.728 01:32:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@173 -- # true 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:04.728 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@174 -- # true 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:04.728 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:04.729 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:04.987 01:32:13 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:04.987 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:04.987 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:04.987 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.050 ms 00:12:04.988 00:12:04.988 --- 10.0.0.3 ping statistics --- 00:12:04.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.988 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:04.988 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:04.988 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:12:04.988 00:12:04.988 --- 10.0.0.4 ping statistics --- 00:12:04.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.988 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:04.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:04.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:12:04.988 00:12:04.988 --- 10.0.0.1 ping statistics --- 00:12:04.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.988 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:04.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:04.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.043 ms 00:12:04.988 00:12:04.988 --- 10.0.0.2 ping statistics --- 00:12:04.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:04.988 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@461 -- # return 0 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=67709 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 67709 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 67709 ']' 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:04.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:04.988 01:32:13 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.988 [2024-11-17 01:32:13.430178] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
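The nvmf/common.sh trace above builds the test network for the NVMe/TCP target: a dedicated network namespace (nvmf_tgt_ns_spdk), veth pairs whose target-side ends are moved into that namespace, 10.0.0.x/24 addresses on both sides, a bridge (nvmf_br) joining the host-side peers, iptables ACCEPT rules for TCP port 4420, and ping checks in both directions. A minimal sketch of that topology, using only commands and names that appear in the log (the real script also sets up a second interface pair, nvmf_init_if2/nvmf_tgt_if2, and first tears down any previous run):

# Sketch only; names, addresses and flags are taken from the nvmf/common.sh trace above.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br         # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br           # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                    # target end lives in the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                    # bridge joins the host-side peers
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT  # let NVMe/TCP traffic in
ping -c 1 10.0.0.3                                                 # host -> target namespace
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                  # target namespace -> host

With connectivity confirmed, the target application (nvmf_tgt) is started inside the namespace, as the lines that follow show.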
00:12:04.988 [2024-11-17 01:32:13.430345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:05.246 [2024-11-17 01:32:13.622137] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:05.505 [2024-11-17 01:32:13.748707] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:05.505 [2024-11-17 01:32:13.748787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:05.505 [2024-11-17 01:32:13.748851] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:05.505 [2024-11-17 01:32:13.748887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:05.505 [2024-11-17 01:32:13.748905] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:05.505 [2024-11-17 01:32:13.750331] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.764 [2024-11-17 01:32:13.972032] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:06.023 [2024-11-17 01:32:14.454558] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.023 [2024-11-17 01:32:14.470772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.023 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:06.283 malloc0 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:06.283 { 00:12:06.283 "params": { 00:12:06.283 "name": "Nvme$subsystem", 00:12:06.283 "trtype": "$TEST_TRANSPORT", 00:12:06.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:06.283 "adrfam": "ipv4", 00:12:06.283 "trsvcid": "$NVMF_PORT", 00:12:06.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:06.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:06.283 "hdgst": ${hdgst:-false}, 00:12:06.283 "ddgst": ${ddgst:-false} 00:12:06.283 }, 00:12:06.283 "method": "bdev_nvme_attach_controller" 00:12:06.283 } 00:12:06.283 EOF 00:12:06.283 )") 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
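The rpc_cmd calls traced above configure the target for the zero-copy test: a TCP transport created with --zcopy, subsystem nqn.2016-06.io.spdk:cnode1 with a 32 MiB malloc bdev exposed as namespace 1, plus data and discovery listeners on 10.0.0.3:4420. As a rough sketch, the same sequence can be issued directly with SPDK's scripts/rpc.py against the default /var/tmp/spdk.sock socket shown in the log (flags copied verbatim from the trace):

# Sketch only; mirrors the rpc_cmd trace from target/zcopy.sh above.
RPC="scripts/rpc.py -s /var/tmp/spdk.sock"
$RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                               # zero-copy TCP transport
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$RPC bdev_malloc_create 32 4096 -b malloc0                                      # 32 MiB bdev, 4096-byte blocks
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1              # expose it as NSID 1

On the host side, bdevperf is launched with --json /dev/fd/62; the gen_nvmf_target_json output printed next resolves to a single bdev_nvme_attach_controller call against 10.0.0.3:4420 and nqn.2016-06.io.spdk:cnode1, which is what drives the 10-second verify workload below.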
00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:06.283 01:32:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:06.283 "params": { 00:12:06.283 "name": "Nvme1", 00:12:06.283 "trtype": "tcp", 00:12:06.283 "traddr": "10.0.0.3", 00:12:06.283 "adrfam": "ipv4", 00:12:06.283 "trsvcid": "4420", 00:12:06.283 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:06.283 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:06.283 "hdgst": false, 00:12:06.283 "ddgst": false 00:12:06.283 }, 00:12:06.283 "method": "bdev_nvme_attach_controller" 00:12:06.283 }' 00:12:06.283 [2024-11-17 01:32:14.644656] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:06.283 [2024-11-17 01:32:14.644864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67742 ] 00:12:06.542 [2024-11-17 01:32:14.830950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.542 [2024-11-17 01:32:14.959471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.801 [2024-11-17 01:32:15.152562] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:07.060 Running I/O for 10 seconds... 00:12:08.934 4871.00 IOPS, 38.05 MiB/s [2024-11-17T01:32:18.330Z] 4973.00 IOPS, 38.85 MiB/s [2024-11-17T01:32:19.707Z] 5012.33 IOPS, 39.16 MiB/s [2024-11-17T01:32:20.645Z] 5018.50 IOPS, 39.21 MiB/s [2024-11-17T01:32:21.583Z] 4994.80 IOPS, 39.02 MiB/s [2024-11-17T01:32:22.553Z] 5026.50 IOPS, 39.27 MiB/s [2024-11-17T01:32:23.497Z] 5042.14 IOPS, 39.39 MiB/s [2024-11-17T01:32:24.434Z] 5057.62 IOPS, 39.51 MiB/s [2024-11-17T01:32:25.372Z] 5076.00 IOPS, 39.66 MiB/s [2024-11-17T01:32:25.372Z] 5090.90 IOPS, 39.77 MiB/s 00:12:16.913 Latency(us) 00:12:16.913 [2024-11-17T01:32:25.372Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:16.913 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:16.913 Verification LBA range: start 0x0 length 0x1000 00:12:16.913 Nvme1n1 : 10.02 5095.02 39.80 0.00 0.00 25055.16 3619.37 32648.84 00:12:16.913 [2024-11-17T01:32:25.372Z] =================================================================================================================== 00:12:16.913 [2024-11-17T01:32:25.372Z] Total : 5095.02 39.80 0.00 0.00 25055.16 3619.37 32648.84 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=67876 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:17.850 { 00:12:17.850 "params": { 00:12:17.850 "name": "Nvme$subsystem", 00:12:17.850 "trtype": "$TEST_TRANSPORT", 00:12:17.850 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:17.850 "adrfam": "ipv4", 00:12:17.850 "trsvcid": "$NVMF_PORT", 00:12:17.850 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:17.850 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:17.850 "hdgst": ${hdgst:-false}, 00:12:17.850 "ddgst": ${ddgst:-false} 00:12:17.850 }, 00:12:17.850 "method": "bdev_nvme_attach_controller" 00:12:17.850 } 00:12:17.850 EOF 00:12:17.850 )") 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:17.850 [2024-11-17 01:32:26.208999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.850 [2024-11-17 01:32:26.209051] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:17.850 01:32:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:17.850 "params": { 00:12:17.850 "name": "Nvme1", 00:12:17.850 "trtype": "tcp", 00:12:17.850 "traddr": "10.0.0.3", 00:12:17.850 "adrfam": "ipv4", 00:12:17.850 "trsvcid": "4420", 00:12:17.850 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:17.850 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:17.850 "hdgst": false, 00:12:17.850 "ddgst": false 00:12:17.850 }, 00:12:17.850 "method": "bdev_nvme_attach_controller" 00:12:17.850 }' 00:12:17.850 [2024-11-17 01:32:26.221006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.850 [2024-11-17 01:32:26.221069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.850 [2024-11-17 01:32:26.232956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.850 [2024-11-17 01:32:26.232996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.850 [2024-11-17 01:32:26.244924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.850 [2024-11-17 01:32:26.244966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.850 [2024-11-17 01:32:26.256944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.851 [2024-11-17 01:32:26.256998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.851 [2024-11-17 01:32:26.268938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.851 [2024-11-17 01:32:26.268995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.851 [2024-11-17 01:32:26.280937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.851 [2024-11-17 01:32:26.280988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.851 [2024-11-17 01:32:26.288917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.851 [2024-11-17 01:32:26.288971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.851 [2024-11-17 01:32:26.300961] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.851 [2024-11-17 01:32:26.301013] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.312939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.312996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.316190] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:12:18.110 [2024-11-17 01:32:26.316357] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67876 ] 00:12:18.110 [2024-11-17 01:32:26.324971] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.325013] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.336936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.336988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.348975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.349025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.360983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.361036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.372966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.373025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.384964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.385016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.396983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.397033] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.408965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.409016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.420985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.421034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.433029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.433102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.445065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.445114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.456999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:18.110 [2024-11-17 01:32:26.457052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.468998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.469047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.481041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.481094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.489037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.489088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.494472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:18.110 [2024-11-17 01:32:26.501063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.501126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.110 [2024-11-17 01:32:26.509065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.110 [2024-11-17 01:32:26.509124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.111 [2024-11-17 01:32:26.517020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.111 [2024-11-17 01:32:26.517072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.111 [2024-11-17 01:32:26.525041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.111 [2024-11-17 01:32:26.525092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.111 [2024-11-17 01:32:26.533034] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.111 [2024-11-17 01:32:26.533086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.111 [2024-11-17 01:32:26.541026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.111 [2024-11-17 01:32:26.541074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.111 [2024-11-17 01:32:26.549046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.111 [2024-11-17 01:32:26.549099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.111 [2024-11-17 01:32:26.557070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.111 [2024-11-17 01:32:26.557120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.111 [2024-11-17 01:32:26.565035] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.111 [2024-11-17 01:32:26.565087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.370 [2024-11-17 01:32:26.573062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.370 [2024-11-17 01:32:26.573110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.370 [2024-11-17 01:32:26.581092] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.370 [2024-11-17 01:32:26.581164] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.587608] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:18.371 [2024-11-17 01:32:26.589084] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.589149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.597062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.597130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.605130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.605192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.613075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.613128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.625078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.625127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.633120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.633183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.641088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.641153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.649074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.649141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.657183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.657247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.665182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.665246] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.673110] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.673159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.681197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.681267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.689148] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.689199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.697116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.697186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.709116] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.709165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.717112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.717187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.725120] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.725169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.733129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.733198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.741111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.741175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.753228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.753281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.761160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.761209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.761598] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:18.371 [2024-11-17 01:32:26.769182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.769247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.777238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.777306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.785221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.785285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.797206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.797260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.805166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.805217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.813158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.813206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.371 [2024-11-17 01:32:26.821163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.371 [2024-11-17 01:32:26.821214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:18.631 [2024-11-17 01:32:26.829223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.829271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.841158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.841209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.849176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.849225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.857161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.857214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.865222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.865279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.873212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.873281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.881184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.881240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.889227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.889278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.897235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.897291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.905223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.905275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.913246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.913301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.921237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.921287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.929281] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.929338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.937260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.937312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 Running I/O for 5 seconds... 
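The long run of paired errors above and below ("Requested NSID 1 already in use" from spdk_nvmf_subsystem_add_ns_ext, followed by "Unable to add namespace" from nvmf_rpc_ns_paused) is the target repeatedly rejecting nvmf_subsystem_add_ns requests for an NSID that already exists while the second bdevperf job (-w randrw, -t 5) starts and runs. A hedged sketch of that pattern; the loop bound is illustrative, while the socket path and arguments are the ones shown in the log:

# Sketch only: re-adding an existing NSID is expected to fail cleanly while host I/O continues.
for i in $(seq 1 50); do
    scripts/rpc.py -s /var/tmp/spdk.sock \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
        || true    # expected failure: "Requested NSID 1 already in use"
done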
00:12:18.631 [2024-11-17 01:32:26.950460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.950516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.961416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.961474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.977620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.977675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:26.989530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:26.989587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:27.006117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:27.006198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:27.023222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:27.023298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:27.034488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:27.034543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:27.048794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:27.048868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:27.065243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:27.065297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.631 [2024-11-17 01:32:27.077289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.631 [2024-11-17 01:32:27.077349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.095997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.096050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.112391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.112449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.129458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.129530] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.141297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.141354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.155956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 
[2024-11-17 01:32:27.156010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.171882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.171939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.182681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.182735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.196334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.196402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.208857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.208909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.224411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.224488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.239786] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.239887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.251291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.251390] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.265365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.265420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.283557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.283619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.296162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.891 [2024-11-17 01:32:27.296217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.891 [2024-11-17 01:32:27.309580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.892 [2024-11-17 01:32:27.309642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.892 [2024-11-17 01:32:27.327073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.892 [2024-11-17 01:32:27.327137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:18.892 [2024-11-17 01:32:27.343959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:18.892 [2024-11-17 01:32:27.344052] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.151 [2024-11-17 01:32:27.359243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.151 [2024-11-17 01:32:27.359298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.151 [2024-11-17 01:32:27.370616] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.151 [2024-11-17 01:32:27.370673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.151 [2024-11-17 01:32:27.384954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.151 [2024-11-17 01:32:27.385008] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.151 [2024-11-17 01:32:27.397536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.151 [2024-11-17 01:32:27.397593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.151 [2024-11-17 01:32:27.413539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.151 [2024-11-17 01:32:27.413593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.151 [2024-11-17 01:32:27.428763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.151 [2024-11-17 01:32:27.428866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.151 [2024-11-17 01:32:27.439266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.151 [2024-11-17 01:32:27.439319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.452469] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.452526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.465653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.465707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.481343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.481400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.498381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.498435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.508822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.508890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.524264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.524318] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.540477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.540536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.551746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.551797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.568384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.568442] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.584674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.584728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.596028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.596086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.152 [2024-11-17 01:32:27.607114] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.152 [2024-11-17 01:32:27.607168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.620101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.620175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.632540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.632594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.648900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.648957] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.665041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.665096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.675451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.675513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.688756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.688838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.700890] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.700947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.717515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.717571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.733708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.733786] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.746410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.746484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.764478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.764538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.776120] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.776175] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.798555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.798613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.809873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.809938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.823091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.823149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.837790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.837871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.848937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.848995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.411 [2024-11-17 01:32:27.861896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.411 [2024-11-17 01:32:27.861949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.670 [2024-11-17 01:32:27.875785] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:27.875871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:27.887926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:27.887982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:27.904982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:27.905042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:27.920581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:27.920639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:27.931010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:27.931068] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 9815.00 IOPS, 76.68 MiB/s [2024-11-17T01:32:28.130Z] [2024-11-17 01:32:27.944795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:27.944873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:27.958010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:27.958067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:27.973530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:19.671 [2024-11-17 01:32:27.973584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:27.990207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:27.990268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.001488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.001527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.015418] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.015480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.032616] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.032668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.043416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.043475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.059653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.059750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.075528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.075584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.086529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.086581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.100707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.100760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.671 [2024-11-17 01:32:28.113501] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.671 [2024-11-17 01:32:28.113555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.130779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.130863] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.145892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.145948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.156662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.156722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.170302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.170357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.186494] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.186548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.202279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.202333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.213195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.213250] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.226821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.226889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.239725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.239838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.255559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.255618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.267096] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.267152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.280740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.280796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.295474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.295544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.311077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.311140] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.322154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.322208] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.335401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.335441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.351477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.351518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.367695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.367750] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:19.930 [2024-11-17 01:32:28.378700] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:19.930 [2024-11-17 01:32:28.378738] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.395516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.395559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.408306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.408368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.424184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.424226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.437262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.437301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.452524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.452584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.470754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.470850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.483407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.483447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.501294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.501355] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.513365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.513420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.531795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.531873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.548251] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.548311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.564830] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.564894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.576597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.576652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.590948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.591000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.602893] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.602962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.618170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.618236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.634461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.634515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.188 [2024-11-17 01:32:28.645393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.188 [2024-11-17 01:32:28.645480] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.662010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.662064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.678073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.678172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.689249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.689303] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.702685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.702739] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.718524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.718578] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.732966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.733020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.743104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.743143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.760220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.760276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.775097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.775152] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.788887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.788940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.801764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.801830] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.817608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.817675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.832368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.832421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.843584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.843641] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.854924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.855019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.871305] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.871399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.887540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.887596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.449 [2024-11-17 01:32:28.898424] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.449 [2024-11-17 01:32:28.898478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:28.913254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:28.913308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:28.925530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:28.925601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 9782.50 IOPS, 76.43 MiB/s [2024-11-17T01:32:29.168Z] [2024-11-17 01:32:28.942337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:28.942391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:28.953554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:28.953609] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:28.965459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:28.965512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:28.977391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:28.977445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:28.995264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:28.995377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 
01:32:29.010909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.010963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.021135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.021189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.034611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.034666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.050816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.050896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.065490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.065544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.077655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.077727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.095863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.095920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.111559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.111614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.122976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.123029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.134149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.134215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.149160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.709 [2024-11-17 01:32:29.149214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.709 [2024-11-17 01:32:29.164329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.710 [2024-11-17 01:32:29.164385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.176048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.176102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.187139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.187209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.199937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.200002] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.214982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.215036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.226188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.226240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.241546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.241600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.253678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.253732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.269880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.269933] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.286975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.287031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.298514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.298555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.312540] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.312585] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.330076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.330129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.342717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.342775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.360104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.360158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.372020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.372073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.385069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.385124] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.402431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.402484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:20.970 [2024-11-17 01:32:29.419392] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:20.970 [2024-11-17 01:32:29.419434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.432146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.432216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.444911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.444962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.461090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.461146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.477966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.478021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.489022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.489076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.502011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.502064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.514261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.514315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.530606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.530646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.545706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.545762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.556937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.556991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.570985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.571039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.587101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.587169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.597432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.597487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.609922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.609975] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.621880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.621949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.637836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.637890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.649734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.649787] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.665889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.665961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.230 [2024-11-17 01:32:29.682202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.230 [2024-11-17 01:32:29.682263] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.694723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.694777] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.712982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.713036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.725123] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.725217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.741951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.742015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.756477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.756531] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.767587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.767629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.781090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.781129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.797154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.797225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.808688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.808745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.819934] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.819987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.832250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.832307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.847978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.848046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.864644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.864697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.875095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.875150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.889164] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.889218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.901298] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.901352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.917621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.917677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 [2024-11-17 01:32:29.934856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.934910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.490 9817.00 IOPS, 76.70 MiB/s [2024-11-17T01:32:29.949Z] [2024-11-17 01:32:29.946415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.490 [2024-11-17 01:32:29.946473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:29.961420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:29.961474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:29.976624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:29.976662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:29.988699] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:29.988738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.002282] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.002337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.017848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:21.750 [2024-11-17 01:32:30.017908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.035218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.035301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.050219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.050273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.061115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.061185] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.074641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.074695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.086931] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.086984] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.105250] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.105316] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.121637] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.121691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.133725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.133778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.145605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.145660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.157134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.157204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.169638] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.169692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.186500] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.186555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:21.750 [2024-11-17 01:32:30.197527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:21.750 [2024-11-17 01:32:30.197580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.214223] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.214277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.228994] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.229061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.239863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.239950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.253964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.254018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.267614] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.267658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.286061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.286146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.297117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.297188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.310426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.310496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.326316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.326370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.341099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.341170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.351493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.351549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.364964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.365017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.377322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.377399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.393721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.393783] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.409264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.409317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.420156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.420209] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.433315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.433369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.445517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.445570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.010 [2024-11-17 01:32:30.462089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.010 [2024-11-17 01:32:30.462143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.477295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.477347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.488377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.488430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.502113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.502184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.514883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.514952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.530078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.530132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.541421] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.541475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.554663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.554716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.570664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.570719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.587218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.587288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.598297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.598368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.614337] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.614393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.627081] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.627135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.644106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.644161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.655428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.655469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.669509] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.669563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.685406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.685463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.701242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.701297] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.270 [2024-11-17 01:32:30.713171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.270 [2024-11-17 01:32:30.713225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.732717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.732772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.747920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.747988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.758748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.758802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.773122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.773176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.785547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.785601] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.802835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.802902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.815131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.815205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.833347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.833404] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.845393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.845447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.862383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.862437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.878138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.878194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.888403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.888456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.901683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.901737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.913914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.913967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.931382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.931424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 9818.00 IOPS, 76.70 MiB/s [2024-11-17T01:32:30.988Z] [2024-11-17 01:32:30.945407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.945482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.959998] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.960056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.529 [2024-11-17 01:32:30.973622] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.529 [2024-11-17 01:32:30.973677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:30.991099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:30.991151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.003104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.003157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.016435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.016489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.032983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.033038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 
01:32:31.048193] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.048244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.058485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.058537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.071870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.071938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.084100] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.084153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.100395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.100447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.115673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.115743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.126254] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.126307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.144409] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.144452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.160497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.160540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.171795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.172026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.184720] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.184908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.198544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.198796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.214471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.214660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.230957] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.231136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:22.789 [2024-11-17 01:32:31.242319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:22.789 [2024-11-17 01:32:31.242531] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.255774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.255983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.270468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.270646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.285276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.285545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.295949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.296141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.309403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.309597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.322071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.322265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.337733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.338067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.353590] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.353769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.365000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.365219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.378849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.379024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.391029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.391221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.407030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.407206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.424140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.424350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.435602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.435873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.449569] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.449884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.465580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.465624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.476407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.476468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.048 [2024-11-17 01:32:31.491934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.048 [2024-11-17 01:32:31.491975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.509359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.509404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.520079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.520123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.533563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.533607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.546377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.546419] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.563832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.564054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.579006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.579050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.590578] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.590756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.605191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.605382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.617768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.617953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.634093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.634378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.646077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.646359] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.663941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.664133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.680079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.680291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.695633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.695912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.708025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.708284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.726689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.726913] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.742270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.742448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.306 [2024-11-17 01:32:31.753295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.306 [2024-11-17 01:32:31.753475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.768461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.768639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.786210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.786389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.797562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.797754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.812059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.812225] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.826749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.827026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.844057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.844237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.859502] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.859803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.870517] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.870697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.885156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.885352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.897143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.897339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.565 [2024-11-17 01:32:31.913197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.565 [2024-11-17 01:32:31.913470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:31.925277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.925455] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:31.938783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.939014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 9806.60 IOPS, 76.61 MiB/s [2024-11-17T01:32:32.025Z] [2024-11-17 01:32:31.950691] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.950884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 00:12:23.566 Latency(us) 00:12:23.566 [2024-11-17T01:32:32.025Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:23.566 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:12:23.566 Nvme1n1 : 5.01 9809.26 76.63 0.00 0.00 13031.25 5213.09 22878.02 00:12:23.566 [2024-11-17T01:32:32.025Z] =================================================================================================================== 00:12:23.566 [2024-11-17T01:32:32.025Z] Total : 9809.26 76.63 0.00 0.00 13031.25 5213.09 22878.02 00:12:23.566 [2024-11-17 01:32:31.958694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.958882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:31.966701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.966887] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:31.974682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.974869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:31.982707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.982888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:31.990689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.990872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:31.998759] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:31.999098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:32.006694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:32.006871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:32.014719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:32.014896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.566 [2024-11-17 01:32:32.022705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.566 [2024-11-17 01:32:32.022921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.030705] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.030884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.038744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.039049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.046745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.046927] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.054740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.054918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.062740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.062994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.070779] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.071058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.078763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.078964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.086743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.086953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.094781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.095003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.102730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.102941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.110771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.110984] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.118751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.118944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.126738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.126934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.134751] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.134947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.142769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.143003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.150763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.150978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.158815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.159009] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.825 [2024-11-17 01:32:32.166794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.825 [2024-11-17 01:32:32.167048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.178772] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.178995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.186765] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.187001] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.198857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.199135] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.206797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.207010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.214793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.214987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.222766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.222944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.230781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.230864] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.238819] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.238872] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.246905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.246963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.254864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.255151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.262782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.262983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.270849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.270886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:23.826 [2024-11-17 01:32:32.278847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:23.826 [2024-11-17 01:32:32.278918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.286835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.286945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.294817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.294880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.306817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.306879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.314856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.314889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.326853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.326903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.334825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.334873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.342905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.342941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.354870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.354907] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.362847] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.362885] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.370866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.370904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.378866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.378905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.086 [2024-11-17 01:32:32.386877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.086 [2024-11-17 01:32:32.386929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.394885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.394922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.402856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.403050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.410860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.410897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.418887] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.418923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.426860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.426895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.442883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.442919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.450868] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.451058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.458888] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.458925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.466896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.466948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.474871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.474906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.482901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.482943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.490933] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.490969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.498901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.499114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.506914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.506951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.514896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.514932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.522939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.522977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.530897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.530934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.087 [2024-11-17 01:32:32.538884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.087 [2024-11-17 01:32:32.538921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.546946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.547028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.558944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.559147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.566912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.566949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.574917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.574955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.582899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.582936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.590943] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.591116] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.602949] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.602987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.610926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.610963] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.618923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.619095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.626966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.627004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.634914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.634951] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.647016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.647053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.654970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.655005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 [2024-11-17 01:32:32.666985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:24.347 [2024-11-17 01:32:32.667023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:24.347 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (67876) - No such process 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 67876 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:24.347 delay0 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.347 01:32:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1' 00:12:24.606 [2024-11-17 01:32:32.912565] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: 
Skipping unsupported current discovery service or discovery service referral 00:12:31.199 Initializing NVMe Controllers 00:12:31.199 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:12:31.199 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:31.199 Initialization complete. Launching workers. 00:12:31.199 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 761 00:12:31.199 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 1048, failed to submit 33 00:12:31.199 success 939, unsuccessful 109, failed 0 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:31.199 rmmod nvme_tcp 00:12:31.199 rmmod nvme_fabrics 00:12:31.199 rmmod nvme_keyring 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 67709 ']' 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 67709 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 67709 ']' 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 67709 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67709 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:31.199 killing process with pid 67709 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67709' 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 67709 00:12:31.199 01:32:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 67709 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:31.767 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.027 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@300 -- # return 0 00:12:32.027 00:12:32.027 real 0m27.795s 00:12:32.027 user 0m45.095s 00:12:32.028 sys 0m7.136s 00:12:32.028 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:32.028 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:32.028 ************************************ 00:12:32.028 END TEST nvmf_zcopy 00:12:32.028 ************************************ 00:12:32.028 01:32:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:32.028 01:32:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
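For context on the nvmf_zcopy results above: the randrw job reports 9809.26 IOPS at an 8192-byte I/O size, which works out to 9809.26 x 8192 / 2^20 ≈ 76.6 MiB/s and matches the logged 76.63 MiB/s. The final phase of that test swaps the contested namespace for a deliberately slow delay bdev and runs the abort example against it. A condensed sketch of that phase, assuming SPDK's scripts/rpc.py client in place of the rpc_cmd wrapper used by the test (NQN, bdev names, and flags taken from the trace):

    # Sketch only -- replays the namespace swap and abort pass traced above.
    # scripts/rpc.py stands in for the test's rpc_cmd wrapper (assumption).
    NQN=nqn.2016-06.io.spdk:cnode1

    # Drop NSID 1, then re-expose it backed by a delay bdev with 1,000,000 us
    # (1 s) average and p99 latency on reads and writes, so commands stay in
    # flight long enough for aborts to land.
    scripts/rpc.py nvmf_subsystem_remove_ns "$NQN" 1
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000
    scripts/rpc.py nvmf_subsystem_add_ns "$NQN" delay0 -n 1

    # Run the abort example for 5 seconds at queue depth 64 against the
    # TCP listener, as in the trace.
    build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 ns:1'

The long completion latency on delay0 is what makes most of the 1048 submitted aborts meaningful; the run above reports 939 successful and 109 unsuccessful aborts with 0 failures.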
00:12:32.028 01:32:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:32.028 01:32:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:32.288 ************************************ 00:12:32.288 START TEST nvmf_nmic 00:12:32.288 ************************************ 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:32.288 * Looking for test storage... 00:12:32.288 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:32.288 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:32.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.288 --rc genhtml_branch_coverage=1 00:12:32.288 --rc genhtml_function_coverage=1 00:12:32.288 --rc genhtml_legend=1 00:12:32.288 --rc geninfo_all_blocks=1 00:12:32.288 --rc geninfo_unexecuted_blocks=1 00:12:32.288 00:12:32.288 ' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:32.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.289 --rc genhtml_branch_coverage=1 00:12:32.289 --rc genhtml_function_coverage=1 00:12:32.289 --rc genhtml_legend=1 00:12:32.289 --rc geninfo_all_blocks=1 00:12:32.289 --rc geninfo_unexecuted_blocks=1 00:12:32.289 00:12:32.289 ' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:32.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.289 --rc genhtml_branch_coverage=1 00:12:32.289 --rc genhtml_function_coverage=1 00:12:32.289 --rc genhtml_legend=1 00:12:32.289 --rc geninfo_all_blocks=1 00:12:32.289 --rc geninfo_unexecuted_blocks=1 00:12:32.289 00:12:32.289 ' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:32.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:32.289 --rc genhtml_branch_coverage=1 00:12:32.289 --rc genhtml_function_coverage=1 00:12:32.289 --rc genhtml_legend=1 00:12:32.289 --rc geninfo_all_blocks=1 00:12:32.289 --rc geninfo_unexecuted_blocks=1 00:12:32.289 00:12:32.289 ' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:32.289 01:32:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:32.289 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:32.289 01:32:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:32.289 Cannot 
find device "nvmf_init_br" 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # true 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:32.289 Cannot find device "nvmf_init_br2" 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # true 00:12:32.289 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:32.289 Cannot find device "nvmf_tgt_br" 00:12:32.290 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@164 -- # true 00:12:32.290 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:32.549 Cannot find device "nvmf_tgt_br2" 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@165 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:32.549 Cannot find device "nvmf_init_br" 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@166 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:32.549 Cannot find device "nvmf_init_br2" 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@167 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:32.549 Cannot find device "nvmf_tgt_br" 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@168 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:32.549 Cannot find device "nvmf_tgt_br2" 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:32.549 Cannot find device "nvmf_br" 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@170 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:32.549 Cannot find device "nvmf_init_if" 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:32.549 Cannot find device "nvmf_init_if2" 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:32.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@173 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:32.549 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@174 -- # true 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 
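The nvmf_veth_init commands that begin here and continue below assemble the test network: veth pairs for the initiator and target sides, the target ends moved into the nvmf_tgt_ns_spdk namespace, the host-side peers enslaved to the nvmf_br bridge, 10.0.0.1/10.0.0.2 on the initiator interfaces and 10.0.0.3/10.0.0.4 inside the namespace, plus iptables rules admitting port 4420. A condensed sketch of the same layout, reduced to one initiator/target pair (interface names and addresses taken from the trace; the real helper creates a second pair of each):

    # Minimal sketch of the topology nvmf_veth_init builds: one initiator
    # interface (10.0.0.1) bridged to one in-namespace target interface (10.0.0.3).
    ip netns add nvmf_tgt_ns_spdk

    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side peers so the initiator address can reach the target.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br

    # Let NVMe/TCP traffic on port 4420 in, as the trace does further down.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The ping checks later in the trace (10.0.0.1 through 10.0.0.4, all answering in well under a millisecond) confirm this layout before the target application is started.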
00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:32.549 01:32:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:12:32.549 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:32.808 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:32.808 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:32.808 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:32.808 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@218 
-- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:32.808 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:32.808 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:32.809 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:32.809 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.151 ms 00:12:32.809 00:12:32.809 --- 10.0.0.3 ping statistics --- 00:12:32.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.809 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:32.809 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:32.809 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.059 ms 00:12:32.809 00:12:32.809 --- 10.0.0.4 ping statistics --- 00:12:32.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.809 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:32.809 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:32.809 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:12:32.809 00:12:32.809 --- 10.0.0.1 ping statistics --- 00:12:32.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.809 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:32.809 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:32.809 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:12:32.809 00:12:32.809 --- 10.0.0.2 ping statistics --- 00:12:32.809 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.809 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@461 -- # return 0 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=68275 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 68275 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 68275 ']' 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:32.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:32.809 01:32:41 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.809 [2024-11-17 01:32:41.212894] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
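Before launching the target, the trace above opens TCP port 4420 on the initiator interfaces, allows forwarding across the bridge, and pings in both directions. A condensed sketch of those steps (the ipts helper in nvmf/common.sh tags each rule with an SPDK_NVMF comment so the later iptables-save | grep -v SPDK_NVMF | iptables-restore cleanup can strip them; the rules below mirror the ones in the trace):

  iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
  iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

  # host -> namespaced target addresses, then namespace -> host addresses
  ping -c 1 10.0.0.3
  ping -c 1 10.0.0.4
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2

  # host-side NVMe/TCP initiator support for the nvme connect calls later in the test
  modprobe nvme-tcp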
00:12:32.809 [2024-11-17 01:32:41.213073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:33.069 [2024-11-17 01:32:41.405680] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:33.328 [2024-11-17 01:32:41.538723] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:33.328 [2024-11-17 01:32:41.538850] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:33.328 [2024-11-17 01:32:41.538886] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:33.328 [2024-11-17 01:32:41.538902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:33.328 [2024-11-17 01:32:41.538919] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:33.328 [2024-11-17 01:32:41.541152] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:33.328 [2024-11-17 01:32:41.541270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:33.328 [2024-11-17 01:32:41.541410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:33.328 [2024-11-17 01:32:41.541426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:33.328 [2024-11-17 01:32:41.762187] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 [2024-11-17 01:32:42.162932] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 Malloc0 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:33.897 01:32:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 [2024-11-17 01:32:42.262188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.897 test case1: single bdev can't be used in multiple subsystems 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 [2024-11-17 01:32:42.285886] bdev.c:8198:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:33.897 [2024-11-17 01:32:42.285956] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:33.897 [2024-11-17 01:32:42.285994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:33.897 request: 00:12:33.897 { 00:12:33.897 
"nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:33.897 "namespace": { 00:12:33.897 "bdev_name": "Malloc0", 00:12:33.897 "no_auto_visible": false 00:12:33.897 }, 00:12:33.897 "method": "nvmf_subsystem_add_ns", 00:12:33.897 "req_id": 1 00:12:33.897 } 00:12:33.897 Got JSON-RPC error response 00:12:33.897 response: 00:12:33.897 { 00:12:33.897 "code": -32602, 00:12:33.897 "message": "Invalid parameters" 00:12:33.897 } 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:33.897 Adding namespace failed - expected result. 00:12:33.897 test case2: host connect to nvmf target in multiple paths 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:33.897 [2024-11-17 01:32:42.298087] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.897 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:34.156 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4421 00:12:34.156 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.156 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.157 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:34.157 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.157 01:32:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:12:36.691 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.691 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.691 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.691 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:36.691 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.691 01:32:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:12:36.691 01:32:44 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:36.691 [global] 00:12:36.691 thread=1 00:12:36.691 invalidate=1 00:12:36.691 rw=write 00:12:36.691 time_based=1 00:12:36.691 runtime=1 00:12:36.691 ioengine=libaio 00:12:36.691 direct=1 00:12:36.691 bs=4096 00:12:36.691 iodepth=1 00:12:36.691 norandommap=0 00:12:36.691 numjobs=1 00:12:36.691 00:12:36.691 verify_dump=1 00:12:36.691 verify_backlog=512 00:12:36.691 verify_state_save=0 00:12:36.691 do_verify=1 00:12:36.691 verify=crc32c-intel 00:12:36.691 [job0] 00:12:36.691 filename=/dev/nvme0n1 00:12:36.691 Could not set queue depth (nvme0n1) 00:12:36.691 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:36.691 fio-3.35 00:12:36.691 Starting 1 thread 00:12:37.629 00:12:37.629 job0: (groupid=0, jobs=1): err= 0: pid=68367: Sun Nov 17 01:32:45 2024 00:12:37.629 read: IOPS=2229, BW=8919KiB/s (9133kB/s)(8928KiB/1001msec) 00:12:37.629 slat (nsec): min=10847, max=70489, avg=15531.59, stdev=5209.40 00:12:37.629 clat (usec): min=176, max=6918, avg=239.25, stdev=204.68 00:12:37.629 lat (usec): min=191, max=6933, avg=254.78, stdev=205.14 00:12:37.629 clat percentiles (usec): 00:12:37.629 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 198], 20.00th=[ 204], 00:12:37.629 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:12:37.629 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 262], 95.00th=[ 277], 00:12:37.629 | 99.00th=[ 322], 99.50th=[ 898], 99.90th=[ 3261], 99.95th=[ 3818], 00:12:37.629 | 99.99th=[ 6915] 00:12:37.629 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:37.629 slat (usec): min=16, max=159, avg=22.84, stdev= 7.10 00:12:37.629 clat (usec): min=112, max=499, avg=142.10, stdev=22.71 00:12:37.629 lat (usec): min=129, max=532, avg=164.94, stdev=25.32 00:12:37.629 clat percentiles (usec): 00:12:37.629 | 1.00th=[ 116], 5.00th=[ 120], 10.00th=[ 122], 20.00th=[ 126], 00:12:37.629 | 30.00th=[ 129], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 141], 00:12:37.629 | 70.00th=[ 149], 80.00th=[ 157], 90.00th=[ 169], 95.00th=[ 182], 00:12:37.629 | 99.00th=[ 212], 99.50th=[ 229], 99.90th=[ 289], 99.95th=[ 367], 00:12:37.629 | 99.99th=[ 498] 00:12:37.629 bw ( KiB/s): min=10304, max=10304, per=100.00%, avg=10304.00, stdev= 0.00, samples=1 00:12:37.629 iops : min= 2576, max= 2576, avg=2576.00, stdev= 0.00, samples=1 00:12:37.629 lat (usec) : 250=91.78%, 500=7.91%, 750=0.04%, 1000=0.06% 00:12:37.629 lat (msec) : 2=0.10%, 4=0.08%, 10=0.02% 00:12:37.629 cpu : usr=2.30%, sys=7.30%, ctx=4794, majf=0, minf=5 00:12:37.629 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:37.629 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.629 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:37.629 issued rwts: total=2232,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:37.629 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:37.629 00:12:37.629 Run status group 0 (all jobs): 00:12:37.629 READ: bw=8919KiB/s (9133kB/s), 8919KiB/s-8919KiB/s (9133kB/s-9133kB/s), io=8928KiB (9142kB), run=1001-1001msec 00:12:37.629 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:12:37.629 00:12:37.629 Disk stats (read/write): 
00:12:37.629 nvme0n1: ios=2098/2196, merge=0/0, ticks=525/345, in_queue=870, util=91.18% 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.629 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.629 01:32:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.629 rmmod nvme_tcp 00:12:37.629 rmmod nvme_fabrics 00:12:37.629 rmmod nvme_keyring 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 68275 ']' 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 68275 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 68275 ']' 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 68275 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:37.629 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68275 00:12:37.888 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:37.888 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:37.888 killing process with pid 68275 00:12:37.888 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68275' 00:12:37.888 01:32:46 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 68275 00:12:37.888 01:32:46 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 68275 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:12:38.823 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@246 -- # remove_spdk_ns 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@300 -- # return 0 00:12:39.081 00:12:39.081 real 0m6.886s 00:12:39.081 user 0m20.480s 00:12:39.081 sys 0m2.554s 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:12:39.081 ************************************ 00:12:39.081 END TEST nvmf_nmic 00:12:39.081 ************************************ 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:39.081 ************************************ 00:12:39.081 START TEST nvmf_fio_target 00:12:39.081 ************************************ 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:39.081 * Looking for test storage... 00:12:39.081 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:12:39.081 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:39.340 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:39.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.341 --rc genhtml_branch_coverage=1 00:12:39.341 --rc genhtml_function_coverage=1 00:12:39.341 --rc genhtml_legend=1 00:12:39.341 --rc geninfo_all_blocks=1 00:12:39.341 --rc geninfo_unexecuted_blocks=1 00:12:39.341 00:12:39.341 ' 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:39.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.341 --rc genhtml_branch_coverage=1 00:12:39.341 --rc genhtml_function_coverage=1 00:12:39.341 --rc genhtml_legend=1 00:12:39.341 --rc geninfo_all_blocks=1 00:12:39.341 --rc geninfo_unexecuted_blocks=1 00:12:39.341 00:12:39.341 ' 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:39.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.341 --rc genhtml_branch_coverage=1 00:12:39.341 --rc genhtml_function_coverage=1 00:12:39.341 --rc genhtml_legend=1 00:12:39.341 --rc geninfo_all_blocks=1 00:12:39.341 --rc geninfo_unexecuted_blocks=1 00:12:39.341 00:12:39.341 ' 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:39.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.341 --rc genhtml_branch_coverage=1 00:12:39.341 --rc genhtml_function_coverage=1 00:12:39.341 --rc genhtml_legend=1 00:12:39.341 --rc geninfo_all_blocks=1 00:12:39.341 --rc geninfo_unexecuted_blocks=1 00:12:39.341 00:12:39.341 ' 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:12:39.341 
01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.341 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.341 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.342 01:32:47 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:12:39.342 Cannot find device "nvmf_init_br" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:12:39.342 Cannot find device "nvmf_init_br2" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:12:39.342 Cannot find device "nvmf_tgt_br" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@164 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:12:39.342 Cannot find device "nvmf_tgt_br2" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@165 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:12:39.342 Cannot find device "nvmf_init_br" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@166 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:12:39.342 Cannot find device "nvmf_init_br2" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@167 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:12:39.342 Cannot find device "nvmf_tgt_br" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@168 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:12:39.342 Cannot find device "nvmf_tgt_br2" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:12:39.342 Cannot find device "nvmf_br" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@170 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:12:39.342 Cannot find device "nvmf_init_if" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:12:39.342 Cannot find device "nvmf_init_if2" 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:12:39.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@173 -- # true 00:12:39.342 
01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:12:39.342 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@174 -- # true 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:12:39.342 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:12:39.602 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:12:39.602 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master 
nvmf_br 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:12:39.603 01:32:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:12:39.603 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:12:39.603 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.080 ms 00:12:39.603 00:12:39.603 --- 10.0.0.3 ping statistics --- 00:12:39.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.603 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:12:39.603 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:12:39.603 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:12:39.603 00:12:39.603 --- 10.0.0.4 ping statistics --- 00:12:39.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.603 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:12:39.603 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:39.603 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:12:39.603 00:12:39.603 --- 10.0.0.1 ping statistics --- 00:12:39.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.603 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:12:39.603 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:39.603 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:12:39.603 00:12:39.603 --- 10.0.0.2 ping statistics --- 00:12:39.603 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:39.603 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@461 -- # return 0 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=68606 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 68606 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 68606 ']' 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:39.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:39.603 01:32:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:39.862 [2024-11-17 01:32:48.174746] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
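As in the nmic run above, nvmfappstart launches nvmf_tgt inside the target namespace with all tracepoint groups enabled and then waits for its JSON-RPC socket. A rough stand-alone equivalent, using the command line recorded in the trace (the spdk_get_version polling loop is an illustrative substitute for the waitforlisten helper, not the helper itself):

  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # block until the target answers on its default RPC socket (stand-in for waitforlisten)
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done

  # per the startup notices, 'spdk_trace -s nvmf -i 0' can snapshot the 0xFFFF tracepoints at runtime

Once the socket answers, the fio.sh script drives the rest of the provisioning (malloc/raid bdevs, subsystem, listener) through rpc.py, as the following trace shows.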
00:12:39.862 [2024-11-17 01:32:48.175520] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:40.122 [2024-11-17 01:32:48.365113] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:40.122 [2024-11-17 01:32:48.454171] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:40.122 [2024-11-17 01:32:48.454250] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:40.122 [2024-11-17 01:32:48.454269] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:40.122 [2024-11-17 01:32:48.454280] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:40.122 [2024-11-17 01:32:48.454307] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:40.122 [2024-11-17 01:32:48.456209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:40.122 [2024-11-17 01:32:48.456368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:40.122 [2024-11-17 01:32:48.456445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:40.122 [2024-11-17 01:32:48.456462] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:40.380 [2024-11-17 01:32:48.630818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:12:40.947 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:40.947 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:12:40.947 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:40.947 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:40.947 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:40.947 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:40.947 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:41.205 [2024-11-17 01:32:49.465504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:41.205 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:41.464 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:41.464 01:32:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:42.032 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:42.032 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:42.291 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:42.291 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:42.551 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:42.551 01:32:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:42.811 01:32:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.070 01:32:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:43.070 01:32:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.330 01:32:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:43.589 01:32:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.849 01:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:43.849 01:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:44.111 01:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:44.370 01:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:44.370 01:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:44.629 01:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:44.630 01:32:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:44.888 01:32:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:12:45.147 [2024-11-17 01:32:53.378972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:12:45.147 01:32:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:45.406 01:32:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:45.664 01:32:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:12:45.664 01:32:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:45.664 01:32:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:45.664 01:32:54 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:45.664 01:32:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:45.664 01:32:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:45.664 01:32:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:48.199 01:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:48.199 01:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:48.199 01:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.199 01:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:48.199 01:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.199 01:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:48.199 01:32:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:48.199 [global] 00:12:48.199 thread=1 00:12:48.199 invalidate=1 00:12:48.199 rw=write 00:12:48.199 time_based=1 00:12:48.199 runtime=1 00:12:48.199 ioengine=libaio 00:12:48.199 direct=1 00:12:48.199 bs=4096 00:12:48.199 iodepth=1 00:12:48.199 norandommap=0 00:12:48.199 numjobs=1 00:12:48.199 00:12:48.199 verify_dump=1 00:12:48.199 verify_backlog=512 00:12:48.199 verify_state_save=0 00:12:48.199 do_verify=1 00:12:48.199 verify=crc32c-intel 00:12:48.199 [job0] 00:12:48.199 filename=/dev/nvme0n1 00:12:48.199 [job1] 00:12:48.199 filename=/dev/nvme0n2 00:12:48.199 [job2] 00:12:48.199 filename=/dev/nvme0n3 00:12:48.199 [job3] 00:12:48.199 filename=/dev/nvme0n4 00:12:48.199 Could not set queue depth (nvme0n1) 00:12:48.199 Could not set queue depth (nvme0n2) 00:12:48.199 Could not set queue depth (nvme0n3) 00:12:48.199 Could not set queue depth (nvme0n4) 00:12:48.199 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:48.199 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:48.199 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:48.199 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:48.199 fio-3.35 00:12:48.199 Starting 4 threads 00:12:49.137 00:12:49.137 job0: (groupid=0, jobs=1): err= 0: pid=68797: Sun Nov 17 01:32:57 2024 00:12:49.137 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:49.137 slat (nsec): min=11299, max=49391, avg=13557.74, stdev=3812.07 00:12:49.137 clat (usec): min=162, max=258, avg=194.29, stdev=15.83 00:12:49.137 lat (usec): min=174, max=272, avg=207.85, stdev=16.14 00:12:49.137 clat percentiles (usec): 00:12:49.137 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:12:49.137 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:12:49.137 | 70.00th=[ 202], 80.00th=[ 208], 90.00th=[ 217], 95.00th=[ 225], 00:12:49.137 | 99.00th=[ 239], 99.50th=[ 245], 99.90th=[ 255], 99.95th=[ 255], 00:12:49.137 | 99.99th=[ 260] 
00:12:49.137 write: IOPS=2720, BW=10.6MiB/s (11.1MB/s)(10.6MiB/1001msec); 0 zone resets 00:12:49.137 slat (nsec): min=13735, max=78964, avg=20808.93, stdev=5083.43 00:12:49.137 clat (usec): min=115, max=1834, avg=147.67, stdev=36.31 00:12:49.137 lat (usec): min=132, max=1855, avg=168.48, stdev=36.81 00:12:49.137 clat percentiles (usec): 00:12:49.137 | 1.00th=[ 121], 5.00th=[ 125], 10.00th=[ 128], 20.00th=[ 133], 00:12:49.137 | 30.00th=[ 139], 40.00th=[ 143], 50.00th=[ 147], 60.00th=[ 151], 00:12:49.137 | 70.00th=[ 153], 80.00th=[ 159], 90.00th=[ 167], 95.00th=[ 176], 00:12:49.137 | 99.00th=[ 188], 99.50th=[ 194], 99.90th=[ 351], 99.95th=[ 383], 00:12:49.137 | 99.99th=[ 1827] 00:12:49.137 bw ( KiB/s): min=12288, max=12288, per=29.56%, avg=12288.00, stdev= 0.00, samples=1 00:12:49.137 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:49.137 lat (usec) : 250=99.85%, 500=0.13% 00:12:49.137 lat (msec) : 2=0.02% 00:12:49.137 cpu : usr=1.50%, sys=8.00%, ctx=5284, majf=0, minf=5 00:12:49.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:49.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.137 issued rwts: total=2560,2723,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:49.137 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:49.137 job1: (groupid=0, jobs=1): err= 0: pid=68798: Sun Nov 17 01:32:57 2024 00:12:49.137 read: IOPS=2438, BW=9754KiB/s (9988kB/s)(9764KiB/1001msec) 00:12:49.137 slat (nsec): min=11423, max=53294, avg=14255.08, stdev=3696.38 00:12:49.137 clat (usec): min=166, max=274, avg=203.61, stdev=18.06 00:12:49.137 lat (usec): min=178, max=289, avg=217.87, stdev=18.49 00:12:49.137 clat percentiles (usec): 00:12:49.137 | 1.00th=[ 174], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 188], 00:12:49.137 | 30.00th=[ 194], 40.00th=[ 198], 50.00th=[ 202], 60.00th=[ 206], 00:12:49.137 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 229], 95.00th=[ 237], 00:12:49.137 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 269], 99.95th=[ 273], 00:12:49.137 | 99.99th=[ 273] 00:12:49.137 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:49.137 slat (nsec): min=14923, max=77824, avg=21985.06, stdev=5130.25 00:12:49.137 clat (usec): min=124, max=596, avg=157.50, stdev=19.32 00:12:49.137 lat (usec): min=143, max=645, avg=179.49, stdev=20.00 00:12:49.137 clat percentiles (usec): 00:12:49.137 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 145], 00:12:49.137 | 30.00th=[ 149], 40.00th=[ 151], 50.00th=[ 155], 60.00th=[ 157], 00:12:49.137 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 180], 95.00th=[ 186], 00:12:49.137 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 302], 99.95th=[ 506], 00:12:49.137 | 99.99th=[ 594] 00:12:49.137 bw ( KiB/s): min=11944, max=11944, per=28.73%, avg=11944.00, stdev= 0.00, samples=1 00:12:49.137 iops : min= 2986, max= 2986, avg=2986.00, stdev= 0.00, samples=1 00:12:49.137 lat (usec) : 250=99.42%, 500=0.54%, 750=0.04% 00:12:49.137 cpu : usr=1.90%, sys=7.40%, ctx=5001, majf=0, minf=7 00:12:49.137 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:49.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.137 issued rwts: total=2441,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:49.137 latency : target=0, window=0, percentile=100.00%, depth=1 
00:12:49.137 job2: (groupid=0, jobs=1): err= 0: pid=68799: Sun Nov 17 01:32:57 2024 00:12:49.137 read: IOPS=2352, BW=9411KiB/s (9636kB/s)(9420KiB/1001msec) 00:12:49.137 slat (nsec): min=11810, max=52351, avg=13881.15, stdev=3160.55 00:12:49.138 clat (usec): min=175, max=542, avg=206.25, stdev=17.80 00:12:49.138 lat (usec): min=188, max=555, avg=220.13, stdev=18.06 00:12:49.138 clat percentiles (usec): 00:12:49.138 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 192], 00:12:49.138 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:12:49.138 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 229], 95.00th=[ 237], 00:12:49.138 | 99.00th=[ 253], 99.50th=[ 258], 99.90th=[ 265], 99.95th=[ 273], 00:12:49.138 | 99.99th=[ 545] 00:12:49.138 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:49.138 slat (nsec): min=14345, max=89705, avg=21887.19, stdev=5197.82 00:12:49.138 clat (usec): min=125, max=652, avg=162.89, stdev=20.43 00:12:49.138 lat (usec): min=144, max=673, avg=184.78, stdev=21.02 00:12:49.138 clat percentiles (usec): 00:12:49.138 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 151], 00:12:49.138 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:12:49.138 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 184], 95.00th=[ 192], 00:12:49.138 | 99.00th=[ 208], 99.50th=[ 219], 99.90th=[ 334], 99.95th=[ 523], 00:12:49.138 | 99.99th=[ 652] 00:12:49.138 bw ( KiB/s): min=11440, max=11440, per=27.52%, avg=11440.00, stdev= 0.00, samples=1 00:12:49.138 iops : min= 2860, max= 2860, avg=2860.00, stdev= 0.00, samples=1 00:12:49.138 lat (usec) : 250=99.23%, 500=0.71%, 750=0.06% 00:12:49.138 cpu : usr=1.90%, sys=7.20%, ctx=4915, majf=0, minf=14 00:12:49.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:49.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.138 issued rwts: total=2355,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:49.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:49.138 job3: (groupid=0, jobs=1): err= 0: pid=68800: Sun Nov 17 01:32:57 2024 00:12:49.138 read: IOPS=2375, BW=9502KiB/s (9731kB/s)(9512KiB/1001msec) 00:12:49.138 slat (nsec): min=11048, max=64237, avg=13693.06, stdev=4188.62 00:12:49.138 clat (usec): min=173, max=383, avg=205.70, stdev=15.91 00:12:49.138 lat (usec): min=185, max=396, avg=219.40, stdev=16.68 00:12:49.138 clat percentiles (usec): 00:12:49.138 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 192], 00:12:49.138 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 204], 60.00th=[ 208], 00:12:49.138 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 227], 95.00th=[ 235], 00:12:49.138 | 99.00th=[ 247], 99.50th=[ 253], 99.90th=[ 273], 99.95th=[ 273], 00:12:49.138 | 99.99th=[ 383] 00:12:49.138 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:49.138 slat (nsec): min=14353, max=90534, avg=20422.35, stdev=4718.47 00:12:49.138 clat (usec): min=112, max=2076, avg=163.11, stdev=49.56 00:12:49.138 lat (usec): min=144, max=2098, avg=183.54, stdev=49.98 00:12:49.138 clat percentiles (usec): 00:12:49.138 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 141], 20.00th=[ 147], 00:12:49.138 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 159], 60.00th=[ 163], 00:12:49.138 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 194], 00:12:49.138 | 99.00th=[ 219], 99.50th=[ 289], 99.90th=[ 914], 99.95th=[ 1057], 00:12:49.138 | 
99.99th=[ 2073] 00:12:49.138 bw ( KiB/s): min=11640, max=11640, per=28.00%, avg=11640.00, stdev= 0.00, samples=1 00:12:49.138 iops : min= 2910, max= 2910, avg=2910.00, stdev= 0.00, samples=1 00:12:49.138 lat (usec) : 250=99.33%, 500=0.59%, 750=0.02%, 1000=0.02% 00:12:49.138 lat (msec) : 2=0.02%, 4=0.02% 00:12:49.138 cpu : usr=2.70%, sys=6.10%, ctx=4939, majf=0, minf=11 00:12:49.138 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:49.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:49.138 issued rwts: total=2378,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:49.138 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:49.138 00:12:49.138 Run status group 0 (all jobs): 00:12:49.138 READ: bw=38.0MiB/s (39.8MB/s), 9411KiB/s-9.99MiB/s (9636kB/s-10.5MB/s), io=38.0MiB (39.9MB), run=1001-1001msec 00:12:49.138 WRITE: bw=40.6MiB/s (42.6MB/s), 9.99MiB/s-10.6MiB/s (10.5MB/s-11.1MB/s), io=40.6MiB (42.6MB), run=1001-1001msec 00:12:49.138 00:12:49.138 Disk stats (read/write): 00:12:49.138 nvme0n1: ios=2098/2529, merge=0/0, ticks=455/394, in_queue=849, util=88.88% 00:12:49.138 nvme0n2: ios=2096/2263, merge=0/0, ticks=471/371, in_queue=842, util=89.78% 00:12:49.138 nvme0n3: ios=2048/2177, merge=0/0, ticks=435/372, in_queue=807, util=89.15% 00:12:49.138 nvme0n4: ios=2048/2206, merge=0/0, ticks=430/379, in_queue=809, util=89.71% 00:12:49.138 01:32:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:49.138 [global] 00:12:49.138 thread=1 00:12:49.138 invalidate=1 00:12:49.138 rw=randwrite 00:12:49.138 time_based=1 00:12:49.138 runtime=1 00:12:49.138 ioengine=libaio 00:12:49.138 direct=1 00:12:49.138 bs=4096 00:12:49.138 iodepth=1 00:12:49.138 norandommap=0 00:12:49.138 numjobs=1 00:12:49.138 00:12:49.138 verify_dump=1 00:12:49.138 verify_backlog=512 00:12:49.138 verify_state_save=0 00:12:49.138 do_verify=1 00:12:49.138 verify=crc32c-intel 00:12:49.138 [job0] 00:12:49.138 filename=/dev/nvme0n1 00:12:49.138 [job1] 00:12:49.138 filename=/dev/nvme0n2 00:12:49.138 [job2] 00:12:49.138 filename=/dev/nvme0n3 00:12:49.138 [job3] 00:12:49.138 filename=/dev/nvme0n4 00:12:49.138 Could not set queue depth (nvme0n1) 00:12:49.138 Could not set queue depth (nvme0n2) 00:12:49.138 Could not set queue depth (nvme0n3) 00:12:49.138 Could not set queue depth (nvme0n4) 00:12:49.397 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.397 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.397 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.397 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.397 fio-3.35 00:12:49.397 Starting 4 threads 00:12:50.773 00:12:50.773 job0: (groupid=0, jobs=1): err= 0: pid=68858: Sun Nov 17 01:32:58 2024 00:12:50.773 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:50.773 slat (nsec): min=10821, max=60637, avg=14128.98, stdev=5247.02 00:12:50.773 clat (usec): min=160, max=750, avg=192.96, stdev=22.11 00:12:50.773 lat (usec): min=171, max=762, avg=207.09, stdev=23.82 00:12:50.773 clat percentiles (usec): 00:12:50.773 | 1.00th=[ 167], 5.00th=[ 172], 10.00th=[ 
174], 20.00th=[ 178], 00:12:50.773 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 194], 00:12:50.773 | 70.00th=[ 200], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 229], 00:12:50.773 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 330], 99.95th=[ 363], 00:12:50.773 | 99.99th=[ 750] 00:12:50.773 write: IOPS=2806, BW=11.0MiB/s (11.5MB/s)(11.0MiB/1001msec); 0 zone resets 00:12:50.773 slat (usec): min=14, max=106, avg=21.35, stdev= 7.18 00:12:50.773 clat (usec): min=109, max=1876, avg=142.39, stdev=38.11 00:12:50.773 lat (usec): min=126, max=1894, avg=163.74, stdev=39.54 00:12:50.773 clat percentiles (usec): 00:12:50.773 | 1.00th=[ 114], 5.00th=[ 118], 10.00th=[ 121], 20.00th=[ 125], 00:12:50.773 | 30.00th=[ 129], 40.00th=[ 135], 50.00th=[ 141], 60.00th=[ 145], 00:12:50.773 | 70.00th=[ 151], 80.00th=[ 157], 90.00th=[ 165], 95.00th=[ 176], 00:12:50.773 | 99.00th=[ 196], 99.50th=[ 206], 99.90th=[ 289], 99.95th=[ 457], 00:12:50.773 | 99.99th=[ 1876] 00:12:50.773 bw ( KiB/s): min=12288, max=12288, per=35.52%, avg=12288.00, stdev= 0.00, samples=1 00:12:50.773 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:50.773 lat (usec) : 250=99.37%, 500=0.60%, 1000=0.02% 00:12:50.773 lat (msec) : 2=0.02% 00:12:50.773 cpu : usr=2.40%, sys=7.70%, ctx=5369, majf=0, minf=17 00:12:50.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.773 issued rwts: total=2560,2809,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.773 job1: (groupid=0, jobs=1): err= 0: pid=68859: Sun Nov 17 01:32:58 2024 00:12:50.773 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:12:50.773 slat (nsec): min=10984, max=48424, avg=14100.41, stdev=4222.80 00:12:50.773 clat (usec): min=164, max=641, avg=195.70, stdev=25.13 00:12:50.773 lat (usec): min=176, max=663, avg=209.80, stdev=26.09 00:12:50.773 clat percentiles (usec): 00:12:50.773 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180], 00:12:50.773 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 190], 60.00th=[ 196], 00:12:50.773 | 70.00th=[ 202], 80.00th=[ 210], 90.00th=[ 223], 95.00th=[ 233], 00:12:50.773 | 99.00th=[ 260], 99.50th=[ 277], 99.90th=[ 529], 99.95th=[ 553], 00:12:50.773 | 99.99th=[ 644] 00:12:50.773 write: IOPS=2774, BW=10.8MiB/s (11.4MB/s)(10.8MiB/1001msec); 0 zone resets 00:12:50.773 slat (nsec): min=15165, max=92175, avg=21731.18, stdev=5955.88 00:12:50.773 clat (usec): min=109, max=546, avg=141.50, stdev=23.63 00:12:50.773 lat (usec): min=126, max=581, avg=163.24, stdev=25.72 00:12:50.773 clat percentiles (usec): 00:12:50.773 | 1.00th=[ 115], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:12:50.773 | 30.00th=[ 128], 40.00th=[ 133], 50.00th=[ 137], 60.00th=[ 143], 00:12:50.773 | 70.00th=[ 149], 80.00th=[ 155], 90.00th=[ 165], 95.00th=[ 176], 00:12:50.773 | 99.00th=[ 204], 99.50th=[ 265], 99.90th=[ 351], 99.95th=[ 465], 00:12:50.773 | 99.99th=[ 545] 00:12:50.773 bw ( KiB/s): min=12288, max=12288, per=35.52%, avg=12288.00, stdev= 0.00, samples=1 00:12:50.773 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:12:50.773 lat (usec) : 250=98.99%, 500=0.94%, 750=0.07% 00:12:50.773 cpu : usr=2.00%, sys=7.90%, ctx=5337, majf=0, minf=10 00:12:50.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.773 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.773 issued rwts: total=2560,2777,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.773 job2: (groupid=0, jobs=1): err= 0: pid=68860: Sun Nov 17 01:32:58 2024 00:12:50.773 read: IOPS=1447, BW=5790KiB/s (5929kB/s)(5796KiB/1001msec) 00:12:50.773 slat (nsec): min=10285, max=61774, avg=16089.23, stdev=5410.42 00:12:50.773 clat (usec): min=226, max=908, avg=351.13, stdev=32.27 00:12:50.773 lat (usec): min=245, max=920, avg=367.22, stdev=32.42 00:12:50.773 clat percentiles (usec): 00:12:50.773 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:12:50.773 | 30.00th=[ 338], 40.00th=[ 343], 50.00th=[ 351], 60.00th=[ 355], 00:12:50.773 | 70.00th=[ 363], 80.00th=[ 371], 90.00th=[ 383], 95.00th=[ 392], 00:12:50.773 | 99.00th=[ 420], 99.50th=[ 445], 99.90th=[ 685], 99.95th=[ 906], 00:12:50.773 | 99.99th=[ 906] 00:12:50.773 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:50.773 slat (nsec): min=11876, max=98582, avg=24754.39, stdev=6751.20 00:12:50.773 clat (usec): min=217, max=566, avg=275.84, stdev=25.87 00:12:50.773 lat (usec): min=243, max=604, avg=300.59, stdev=25.81 00:12:50.773 clat percentiles (usec): 00:12:50.773 | 1.00th=[ 231], 5.00th=[ 239], 10.00th=[ 245], 20.00th=[ 253], 00:12:50.773 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:12:50.773 | 70.00th=[ 289], 80.00th=[ 297], 90.00th=[ 310], 95.00th=[ 318], 00:12:50.773 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 424], 99.95th=[ 570], 00:12:50.773 | 99.99th=[ 570] 00:12:50.773 bw ( KiB/s): min= 8184, max= 8184, per=23.65%, avg=8184.00, stdev= 0.00, samples=1 00:12:50.773 iops : min= 2046, max= 2046, avg=2046.00, stdev= 0.00, samples=1 00:12:50.773 lat (usec) : 250=8.31%, 500=91.46%, 750=0.20%, 1000=0.03% 00:12:50.773 cpu : usr=2.30%, sys=4.50%, ctx=2986, majf=0, minf=8 00:12:50.773 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.773 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.773 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.773 issued rwts: total=1449,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.773 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.774 job3: (groupid=0, jobs=1): err= 0: pid=68862: Sun Nov 17 01:32:58 2024 00:12:50.774 read: IOPS=1448, BW=5794KiB/s (5933kB/s)(5800KiB/1001msec) 00:12:50.774 slat (usec): min=8, max=123, avg=17.79, stdev= 6.47 00:12:50.774 clat (usec): min=222, max=981, avg=348.98, stdev=33.12 00:12:50.774 lat (usec): min=242, max=1000, avg=366.78, stdev=33.07 00:12:50.774 clat percentiles (usec): 00:12:50.774 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 318], 20.00th=[ 326], 00:12:50.774 | 30.00th=[ 334], 40.00th=[ 338], 50.00th=[ 347], 60.00th=[ 355], 00:12:50.774 | 70.00th=[ 359], 80.00th=[ 367], 90.00th=[ 383], 95.00th=[ 392], 00:12:50.774 | 99.00th=[ 420], 99.50th=[ 429], 99.90th=[ 644], 99.95th=[ 979], 00:12:50.774 | 99.99th=[ 979] 00:12:50.774 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:12:50.774 slat (nsec): min=13935, max=98139, avg=24218.99, stdev=7650.55 00:12:50.774 clat (usec): min=223, max=479, avg=276.44, stdev=23.66 00:12:50.774 lat (usec): min=241, max=499, avg=300.66, stdev=25.02 00:12:50.774 clat percentiles (usec): 00:12:50.774 | 1.00th=[ 237], 5.00th=[ 
245], 10.00th=[ 251], 20.00th=[ 258], 00:12:50.774 | 30.00th=[ 262], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:12:50.774 | 70.00th=[ 285], 80.00th=[ 297], 90.00th=[ 306], 95.00th=[ 318], 00:12:50.774 | 99.00th=[ 338], 99.50th=[ 351], 99.90th=[ 424], 99.95th=[ 478], 00:12:50.774 | 99.99th=[ 478] 00:12:50.774 bw ( KiB/s): min= 8176, max= 8176, per=23.63%, avg=8176.00, stdev= 0.00, samples=1 00:12:50.774 iops : min= 2044, max= 2044, avg=2044.00, stdev= 0.00, samples=1 00:12:50.774 lat (usec) : 250=4.99%, 500=94.81%, 750=0.17%, 1000=0.03% 00:12:50.774 cpu : usr=1.80%, sys=5.20%, ctx=2987, majf=0, minf=11 00:12:50.774 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:50.774 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.774 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.774 issued rwts: total=1450,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.774 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:50.774 00:12:50.774 Run status group 0 (all jobs): 00:12:50.774 READ: bw=31.3MiB/s (32.8MB/s), 5790KiB/s-9.99MiB/s (5929kB/s-10.5MB/s), io=31.3MiB (32.8MB), run=1001-1001msec 00:12:50.774 WRITE: bw=33.8MiB/s (35.4MB/s), 6138KiB/s-11.0MiB/s (6285kB/s-11.5MB/s), io=33.8MiB (35.5MB), run=1001-1001msec 00:12:50.774 00:12:50.774 Disk stats (read/write): 00:12:50.774 nvme0n1: ios=2137/2560, merge=0/0, ticks=491/390, in_queue=881, util=88.78% 00:12:50.774 nvme0n2: ios=2135/2560, merge=0/0, ticks=446/383, in_queue=829, util=88.88% 00:12:50.774 nvme0n3: ios=1078/1536, merge=0/0, ticks=365/409, in_queue=774, util=88.97% 00:12:50.774 nvme0n4: ios=1078/1536, merge=0/0, ticks=357/405, in_queue=762, util=89.80% 00:12:50.774 01:32:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:50.774 [global] 00:12:50.774 thread=1 00:12:50.774 invalidate=1 00:12:50.774 rw=write 00:12:50.774 time_based=1 00:12:50.774 runtime=1 00:12:50.774 ioengine=libaio 00:12:50.774 direct=1 00:12:50.774 bs=4096 00:12:50.774 iodepth=128 00:12:50.774 norandommap=0 00:12:50.774 numjobs=1 00:12:50.774 00:12:50.774 verify_dump=1 00:12:50.774 verify_backlog=512 00:12:50.774 verify_state_save=0 00:12:50.774 do_verify=1 00:12:50.774 verify=crc32c-intel 00:12:50.774 [job0] 00:12:50.774 filename=/dev/nvme0n1 00:12:50.774 [job1] 00:12:50.774 filename=/dev/nvme0n2 00:12:50.774 [job2] 00:12:50.774 filename=/dev/nvme0n3 00:12:50.774 [job3] 00:12:50.774 filename=/dev/nvme0n4 00:12:50.774 Could not set queue depth (nvme0n1) 00:12:50.774 Could not set queue depth (nvme0n2) 00:12:50.774 Could not set queue depth (nvme0n3) 00:12:50.774 Could not set queue depth (nvme0n4) 00:12:50.774 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:50.774 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:50.774 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:50.774 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:50.774 fio-3.35 00:12:50.774 Starting 4 threads 00:12:52.151 00:12:52.151 job0: (groupid=0, jobs=1): err= 0: pid=68922: Sun Nov 17 01:33:00 2024 00:12:52.151 read: IOPS=2074, BW=8299KiB/s (8498kB/s)(8324KiB/1003msec) 00:12:52.151 slat (usec): min=7, max=9355, avg=219.65, stdev=1006.29 
00:12:52.151 clat (usec): min=1862, max=35077, avg=27928.98, stdev=3716.72 00:12:52.151 lat (usec): min=4828, max=35090, avg=28148.62, stdev=3591.65 00:12:52.151 clat percentiles (usec): 00:12:52.151 | 1.00th=[11863], 5.00th=[23200], 10.00th=[24249], 20.00th=[26084], 00:12:52.151 | 30.00th=[26870], 40.00th=[27657], 50.00th=[27919], 60.00th=[28181], 00:12:52.151 | 70.00th=[28705], 80.00th=[30802], 90.00th=[32375], 95.00th=[32900], 00:12:52.151 | 99.00th=[34866], 99.50th=[34866], 99.90th=[34866], 99.95th=[34866], 00:12:52.151 | 99.99th=[34866] 00:12:52.151 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:12:52.151 slat (usec): min=10, max=12594, avg=204.28, stdev=931.16 00:12:52.151 clat (usec): min=12719, max=38129, avg=26642.28, stdev=3377.96 00:12:52.151 lat (usec): min=12746, max=38256, avg=26846.56, stdev=3275.29 00:12:52.151 clat percentiles (usec): 00:12:52.151 | 1.00th=[16581], 5.00th=[21365], 10.00th=[22938], 20.00th=[24511], 00:12:52.151 | 30.00th=[25560], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:12:52.151 | 70.00th=[27919], 80.00th=[28705], 90.00th=[30016], 95.00th=[31065], 00:12:52.151 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38011], 99.95th=[38011], 00:12:52.151 | 99.99th=[38011] 00:12:52.151 bw ( KiB/s): min= 9736, max= 9992, per=16.93%, avg=9864.00, stdev=181.02, samples=2 00:12:52.151 iops : min= 2434, max= 2498, avg=2466.00, stdev=45.25, samples=2 00:12:52.151 lat (msec) : 2=0.02%, 10=0.37%, 20=1.44%, 50=98.17% 00:12:52.151 cpu : usr=2.20%, sys=7.49%, ctx=411, majf=0, minf=1 00:12:52.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.6% 00:12:52.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:52.151 issued rwts: total=2081,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:52.151 job1: (groupid=0, jobs=1): err= 0: pid=68923: Sun Nov 17 01:33:00 2024 00:12:52.151 read: IOPS=4690, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1003msec) 00:12:52.151 slat (usec): min=8, max=3258, avg=98.23, stdev=465.03 00:12:52.151 clat (usec): min=240, max=14424, avg=12954.64, stdev=1203.29 00:12:52.151 lat (usec): min=3152, max=14441, avg=13052.87, stdev=1111.98 00:12:52.151 clat percentiles (usec): 00:12:52.151 | 1.00th=[ 6652], 5.00th=[11338], 10.00th=[12387], 20.00th=[12649], 00:12:52.151 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:12:52.151 | 70.00th=[13435], 80.00th=[13566], 90.00th=[13829], 95.00th=[14091], 00:12:52.151 | 99.00th=[14353], 99.50th=[14353], 99.90th=[14353], 99.95th=[14484], 00:12:52.151 | 99.99th=[14484] 00:12:52.151 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:12:52.151 slat (usec): min=10, max=4921, avg=97.49, stdev=422.57 00:12:52.151 clat (usec): min=9590, max=15329, avg=12814.38, stdev=667.26 00:12:52.151 lat (usec): min=11333, max=15348, avg=12911.86, stdev=517.61 00:12:52.151 clat percentiles (usec): 00:12:52.151 | 1.00th=[10290], 5.00th=[11994], 10.00th=[12256], 20.00th=[12387], 00:12:52.151 | 30.00th=[12649], 40.00th=[12649], 50.00th=[12780], 60.00th=[12911], 00:12:52.151 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13435], 95.00th=[13698], 00:12:52.151 | 99.00th=[14877], 99.50th=[15139], 99.90th=[15270], 99.95th=[15270], 00:12:52.151 | 99.99th=[15270] 00:12:52.151 bw ( KiB/s): min=20232, max=20480, per=34.93%, avg=20356.00, stdev=175.36, samples=2 00:12:52.151 
iops : min= 5058, max= 5120, avg=5089.00, stdev=43.84, samples=2 00:12:52.151 lat (usec) : 250=0.01% 00:12:52.151 lat (msec) : 4=0.33%, 10=0.95%, 20=98.72% 00:12:52.151 cpu : usr=4.09%, sys=14.07%, ctx=310, majf=0, minf=2 00:12:52.151 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:52.151 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.151 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:52.151 issued rwts: total=4705,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.151 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:52.151 job2: (groupid=0, jobs=1): err= 0: pid=68924: Sun Nov 17 01:33:00 2024 00:12:52.151 read: IOPS=3334, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1003msec) 00:12:52.151 slat (usec): min=5, max=7638, avg=144.80, stdev=627.29 00:12:52.151 clat (usec): min=1445, max=38617, avg=18264.09, stdev=6607.47 00:12:52.151 lat (usec): min=4660, max=38630, avg=18408.89, stdev=6634.64 00:12:52.151 clat percentiles (usec): 00:12:52.151 | 1.00th=[11338], 5.00th=[14091], 10.00th=[14353], 20.00th=[14484], 00:12:52.151 | 30.00th=[14877], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:12:52.151 | 70.00th=[16319], 80.00th=[21890], 90.00th=[30278], 95.00th=[33817], 00:12:52.151 | 99.00th=[36963], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:12:52.151 | 99.99th=[38536] 00:12:52.151 write: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec); 0 zone resets 00:12:52.151 slat (usec): min=10, max=7337, avg=135.68, stdev=553.85 00:12:52.151 clat (usec): min=10722, max=39081, avg=18260.14, stdev=6278.01 00:12:52.151 lat (usec): min=12290, max=39105, avg=18395.82, stdev=6299.34 00:12:52.151 clat percentiles (usec): 00:12:52.151 | 1.00th=[11600], 5.00th=[13435], 10.00th=[13829], 20.00th=[14091], 00:12:52.151 | 30.00th=[14353], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:12:52.151 | 70.00th=[17433], 80.00th=[25822], 90.00th=[28181], 95.00th=[30278], 00:12:52.151 | 99.00th=[38011], 99.50th=[38011], 99.90th=[39060], 99.95th=[39060], 00:12:52.151 | 99.99th=[39060] 00:12:52.151 bw ( KiB/s): min=11256, max=17416, per=24.60%, avg=14336.00, stdev=4355.78, samples=2 00:12:52.151 iops : min= 2814, max= 4354, avg=3584.00, stdev=1088.94, samples=2 00:12:52.151 lat (msec) : 2=0.01%, 10=0.40%, 20=74.20%, 50=25.39% 00:12:52.152 cpu : usr=3.79%, sys=10.38%, ctx=488, majf=0, minf=1 00:12:52.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:52.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:52.152 issued rwts: total=3345,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:52.152 job3: (groupid=0, jobs=1): err= 0: pid=68925: Sun Nov 17 01:33:00 2024 00:12:52.152 read: IOPS=3065, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1002msec) 00:12:52.152 slat (usec): min=6, max=9364, avg=153.00, stdev=818.02 00:12:52.152 clat (usec): min=11054, max=30629, avg=19816.31, stdev=6260.34 00:12:52.152 lat (usec): min=13655, max=30645, avg=19969.31, stdev=6259.78 00:12:52.152 clat percentiles (usec): 00:12:52.152 | 1.00th=[11731], 5.00th=[13960], 10.00th=[14091], 20.00th=[14353], 00:12:52.152 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15139], 60.00th=[21365], 00:12:52.152 | 70.00th=[26608], 80.00th=[27395], 90.00th=[27919], 95.00th=[28443], 00:12:52.152 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30540], 
99.95th=[30540], 00:12:52.152 | 99.99th=[30540] 00:12:52.152 write: IOPS=3340, BW=13.0MiB/s (13.7MB/s)(13.1MiB/1002msec); 0 zone resets 00:12:52.152 slat (usec): min=13, max=8831, avg=150.02, stdev=755.18 00:12:52.152 clat (usec): min=1704, max=29741, avg=19352.37, stdev=6342.50 00:12:52.152 lat (usec): min=1731, max=29759, avg=19502.39, stdev=6341.61 00:12:52.152 clat percentiles (usec): 00:12:52.152 | 1.00th=[ 5407], 5.00th=[13042], 10.00th=[13566], 20.00th=[14091], 00:12:52.152 | 30.00th=[14353], 40.00th=[14615], 50.00th=[15008], 60.00th=[24249], 00:12:52.152 | 70.00th=[25822], 80.00th=[26346], 90.00th=[26870], 95.00th=[27919], 00:12:52.152 | 99.00th=[29492], 99.50th=[29754], 99.90th=[29754], 99.95th=[29754], 00:12:52.152 | 99.99th=[29754] 00:12:52.152 bw ( KiB/s): min= 9992, max=15768, per=22.10%, avg=12880.00, stdev=4084.25, samples=2 00:12:52.152 iops : min= 2498, max= 3942, avg=3220.00, stdev=1021.06, samples=2 00:12:52.152 lat (msec) : 2=0.19%, 4=0.11%, 10=1.00%, 20=55.49%, 50=43.22% 00:12:52.152 cpu : usr=3.50%, sys=9.29%, ctx=202, majf=0, minf=8 00:12:52.152 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:52.152 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.152 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:52.152 issued rwts: total=3072,3347,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.152 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:52.152 00:12:52.152 Run status group 0 (all jobs): 00:12:52.152 READ: bw=51.4MiB/s (53.9MB/s), 8299KiB/s-18.3MiB/s (8498kB/s-19.2MB/s), io=51.6MiB (54.1MB), run=1002-1003msec 00:12:52.152 WRITE: bw=56.9MiB/s (59.7MB/s), 9.97MiB/s-19.9MiB/s (10.5MB/s-20.9MB/s), io=57.1MiB (59.8MB), run=1002-1003msec 00:12:52.152 00:12:52.152 Disk stats (read/write): 00:12:52.152 nvme0n1: ios=1984/2048, merge=0/0, ticks=13139/12375, in_queue=25514, util=87.56% 00:12:52.152 nvme0n2: ios=4101/4352, merge=0/0, ticks=12033/12000, in_queue=24033, util=87.88% 00:12:52.152 nvme0n3: ios=3072/3259, merge=0/0, ticks=12397/12238, in_queue=24635, util=89.35% 00:12:52.152 nvme0n4: ios=2560/2624, merge=0/0, ticks=12682/12414, in_queue=25096, util=89.50% 00:12:52.152 01:33:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:52.152 [global] 00:12:52.152 thread=1 00:12:52.152 invalidate=1 00:12:52.152 rw=randwrite 00:12:52.152 time_based=1 00:12:52.152 runtime=1 00:12:52.152 ioengine=libaio 00:12:52.152 direct=1 00:12:52.152 bs=4096 00:12:52.152 iodepth=128 00:12:52.152 norandommap=0 00:12:52.152 numjobs=1 00:12:52.152 00:12:52.152 verify_dump=1 00:12:52.152 verify_backlog=512 00:12:52.152 verify_state_save=0 00:12:52.152 do_verify=1 00:12:52.152 verify=crc32c-intel 00:12:52.152 [job0] 00:12:52.152 filename=/dev/nvme0n1 00:12:52.152 [job1] 00:12:52.152 filename=/dev/nvme0n2 00:12:52.152 [job2] 00:12:52.152 filename=/dev/nvme0n3 00:12:52.152 [job3] 00:12:52.152 filename=/dev/nvme0n4 00:12:52.152 Could not set queue depth (nvme0n1) 00:12:52.152 Could not set queue depth (nvme0n2) 00:12:52.152 Could not set queue depth (nvme0n3) 00:12:52.152 Could not set queue depth (nvme0n4) 00:12:52.152 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:52.152 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:52.152 job2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:52.152 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:52.152 fio-3.35 00:12:52.152 Starting 4 threads 00:12:53.530 00:12:53.530 job0: (groupid=0, jobs=1): err= 0: pid=68978: Sun Nov 17 01:33:01 2024 00:12:53.530 read: IOPS=2094, BW=8377KiB/s (8578kB/s)(8444KiB/1008msec) 00:12:53.530 slat (usec): min=7, max=14043, avg=208.51, stdev=1443.31 00:12:53.530 clat (usec): min=3044, max=45852, avg=28200.17, stdev=3909.32 00:12:53.530 lat (usec): min=15342, max=55110, avg=28408.68, stdev=3958.22 00:12:53.531 clat percentiles (usec): 00:12:53.531 | 1.00th=[15664], 5.00th=[17695], 10.00th=[26608], 20.00th=[27657], 00:12:53.531 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28443], 60.00th=[28967], 00:12:53.531 | 70.00th=[29230], 80.00th=[29492], 90.00th=[30540], 95.00th=[30802], 00:12:53.531 | 99.00th=[43779], 99.50th=[44827], 99.90th=[45876], 99.95th=[45876], 00:12:53.531 | 99.99th=[45876] 00:12:53.531 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:12:53.531 slat (usec): min=13, max=21982, avg=211.86, stdev=1440.73 00:12:53.531 clat (usec): min=13335, max=37689, avg=26615.63, stdev=2865.94 00:12:53.531 lat (usec): min=16461, max=37881, avg=26827.49, stdev=2562.57 00:12:53.531 clat percentiles (usec): 00:12:53.531 | 1.00th=[15533], 5.00th=[23987], 10.00th=[24249], 20.00th=[25035], 00:12:53.531 | 30.00th=[25822], 40.00th=[26346], 50.00th=[26608], 60.00th=[26870], 00:12:53.531 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:12:53.531 | 99.00th=[37487], 99.50th=[37487], 99.90th=[37487], 99.95th=[37487], 00:12:53.531 | 99.99th=[37487] 00:12:53.531 bw ( KiB/s): min= 9739, max=10240, per=16.95%, avg=9989.50, stdev=354.26, samples=2 00:12:53.531 iops : min= 2434, max= 2560, avg=2497.00, stdev=89.10, samples=2 00:12:53.531 lat (msec) : 4=0.02%, 20=4.26%, 50=95.72% 00:12:53.531 cpu : usr=2.18%, sys=7.85%, ctx=131, majf=0, minf=9 00:12:53.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:12:53.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.531 issued rwts: total=2111,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.531 job1: (groupid=0, jobs=1): err= 0: pid=68979: Sun Nov 17 01:33:01 2024 00:12:53.531 read: IOPS=4962, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1005msec) 00:12:53.531 slat (usec): min=5, max=8686, avg=94.83, stdev=590.80 00:12:53.531 clat (usec): min=1535, max=21037, avg=13097.95, stdev=1649.67 00:12:53.531 lat (usec): min=6374, max=25604, avg=13192.78, stdev=1650.27 00:12:53.531 clat percentiles (usec): 00:12:53.531 | 1.00th=[ 7046], 5.00th=[10028], 10.00th=[12125], 20.00th=[12518], 00:12:53.531 | 30.00th=[12780], 40.00th=[12911], 50.00th=[13173], 60.00th=[13304], 00:12:53.531 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14746], 00:12:53.531 | 99.00th=[20317], 99.50th=[20579], 99.90th=[21103], 99.95th=[21103], 00:12:53.531 | 99.99th=[21103] 00:12:53.531 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:53.531 slat (usec): min=9, max=9626, avg=95.75, stdev=572.07 00:12:53.531 clat (usec): min=6468, max=17543, avg=12116.71, stdev=1157.13 00:12:53.531 lat (usec): min=8895, max=17564, avg=12212.47, stdev=1049.64 00:12:53.531 clat 
percentiles (usec): 00:12:53.531 | 1.00th=[ 7963], 5.00th=[10814], 10.00th=[11076], 20.00th=[11469], 00:12:53.531 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:12:53.531 | 70.00th=[12518], 80.00th=[12649], 90.00th=[12911], 95.00th=[13042], 00:12:53.531 | 99.00th=[17433], 99.50th=[17433], 99.90th=[17433], 99.95th=[17433], 00:12:53.531 | 99.99th=[17433] 00:12:53.531 bw ( KiB/s): min=20480, max=20521, per=34.79%, avg=20500.50, stdev=28.99, samples=2 00:12:53.531 iops : min= 5120, max= 5130, avg=5125.00, stdev= 7.07, samples=2 00:12:53.531 lat (msec) : 2=0.01%, 10=4.25%, 20=95.09%, 50=0.64% 00:12:53.531 cpu : usr=3.88%, sys=13.94%, ctx=213, majf=0, minf=4 00:12:53.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:53.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.531 issued rwts: total=4987,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.531 job2: (groupid=0, jobs=1): err= 0: pid=68980: Sun Nov 17 01:33:01 2024 00:12:53.531 read: IOPS=2094, BW=8377KiB/s (8578kB/s)(8444KiB/1008msec) 00:12:53.531 slat (usec): min=8, max=13910, avg=208.30, stdev=1431.31 00:12:53.531 clat (usec): min=4158, max=46037, avg=28198.85, stdev=3912.07 00:12:53.531 lat (usec): min=15234, max=55673, avg=28407.15, stdev=3959.24 00:12:53.531 clat percentiles (usec): 00:12:53.531 | 1.00th=[15533], 5.00th=[17695], 10.00th=[26608], 20.00th=[27657], 00:12:53.531 | 30.00th=[27919], 40.00th=[28181], 50.00th=[28705], 60.00th=[28967], 00:12:53.531 | 70.00th=[29230], 80.00th=[29492], 90.00th=[30802], 95.00th=[31065], 00:12:53.531 | 99.00th=[43254], 99.50th=[44303], 99.90th=[45876], 99.95th=[45876], 00:12:53.531 | 99.99th=[45876] 00:12:53.531 write: IOPS=2539, BW=9.92MiB/s (10.4MB/s)(10.0MiB/1008msec); 0 zone resets 00:12:53.531 slat (usec): min=9, max=22898, avg=212.19, stdev=1461.34 00:12:53.531 clat (usec): min=12819, max=38411, avg=26643.26, stdev=2975.75 00:12:53.531 lat (usec): min=16422, max=38439, avg=26855.46, stdev=2680.12 00:12:53.531 clat percentiles (usec): 00:12:53.531 | 1.00th=[15533], 5.00th=[23987], 10.00th=[24249], 20.00th=[25035], 00:12:53.531 | 30.00th=[25822], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:12:53.531 | 70.00th=[27395], 80.00th=[28181], 90.00th=[28443], 95.00th=[29230], 00:12:53.531 | 99.00th=[38011], 99.50th=[38011], 99.90th=[38536], 99.95th=[38536], 00:12:53.531 | 99.99th=[38536] 00:12:53.531 bw ( KiB/s): min= 9739, max=10240, per=16.95%, avg=9989.50, stdev=354.26, samples=2 00:12:53.531 iops : min= 2434, max= 2560, avg=2497.00, stdev=89.10, samples=2 00:12:53.531 lat (msec) : 10=0.02%, 20=4.39%, 50=95.59% 00:12:53.531 cpu : usr=3.08%, sys=6.45%, ctx=97, majf=0, minf=7 00:12:53.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:12:53.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.531 issued rwts: total=2111,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.531 job3: (groupid=0, jobs=1): err= 0: pid=68981: Sun Nov 17 01:33:01 2024 00:12:53.531 read: IOPS=4150, BW=16.2MiB/s (17.0MB/s)(16.2MiB/1002msec) 00:12:53.531 slat (usec): min=7, max=7719, avg=109.14, stdev=689.32 00:12:53.531 clat (usec): min=1234, max=23975, 
avg=15177.98, stdev=1945.79 00:12:53.531 lat (usec): min=6530, max=29050, avg=15287.13, stdev=1968.45 00:12:53.531 clat percentiles (usec): 00:12:53.531 | 1.00th=[ 7177], 5.00th=[11338], 10.00th=[14091], 20.00th=[14615], 00:12:53.531 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15270], 60.00th=[15533], 00:12:53.531 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16319], 95.00th=[16712], 00:12:53.531 | 99.00th=[23200], 99.50th=[23725], 99.90th=[23987], 99.95th=[23987], 00:12:53.531 | 99.99th=[23987] 00:12:53.531 write: IOPS=4598, BW=18.0MiB/s (18.8MB/s)(18.0MiB/1002msec); 0 zone resets 00:12:53.531 slat (usec): min=10, max=9260, avg=110.43, stdev=660.52 00:12:53.531 clat (usec): min=7218, max=19245, avg=13843.34, stdev=1293.47 00:12:53.531 lat (usec): min=7550, max=19275, avg=13953.77, stdev=1146.29 00:12:53.531 clat percentiles (usec): 00:12:53.531 | 1.00th=[ 8979], 5.00th=[12387], 10.00th=[12649], 20.00th=[13042], 00:12:53.531 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[14091], 00:12:53.531 | 70.00th=[14353], 80.00th=[14484], 90.00th=[14877], 95.00th=[15270], 00:12:53.531 | 99.00th=[18744], 99.50th=[19006], 99.90th=[19268], 99.95th=[19268], 00:12:53.531 | 99.99th=[19268] 00:12:53.531 bw ( KiB/s): min=17912, max=18440, per=30.85%, avg=18176.00, stdev=373.35, samples=2 00:12:53.531 iops : min= 4478, max= 4610, avg=4544.00, stdev=93.34, samples=2 00:12:53.531 lat (msec) : 2=0.01%, 10=2.56%, 20=96.56%, 50=0.88% 00:12:53.531 cpu : usr=3.80%, sys=13.59%, ctx=186, majf=0, minf=2 00:12:53.531 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:53.531 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.531 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.531 issued rwts: total=4159,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.531 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.531 00:12:53.531 Run status group 0 (all jobs): 00:12:53.531 READ: bw=51.8MiB/s (54.3MB/s), 8377KiB/s-19.4MiB/s (8578kB/s-20.3MB/s), io=52.2MiB (54.8MB), run=1002-1008msec 00:12:53.531 WRITE: bw=57.5MiB/s (60.3MB/s), 9.92MiB/s-19.9MiB/s (10.4MB/s-20.9MB/s), io=58.0MiB (60.8MB), run=1002-1008msec 00:12:53.531 00:12:53.531 Disk stats (read/write): 00:12:53.531 nvme0n1: ios=1918/2048, merge=0/0, ticks=51207/51748, in_queue=102955, util=89.48% 00:12:53.531 nvme0n2: ios=4145/4542, merge=0/0, ticks=50842/50309, in_queue=101151, util=88.47% 00:12:53.531 nvme0n3: ios=1868/2048, merge=0/0, ticks=51301/52004, in_queue=103305, util=89.06% 00:12:53.531 nvme0n4: ios=3605/3840, merge=0/0, ticks=51526/49299, in_queue=100825, util=90.33% 00:12:53.531 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:53.531 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=68994 00:12:53.531 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:53.531 01:33:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:53.531 [global] 00:12:53.531 thread=1 00:12:53.531 invalidate=1 00:12:53.531 rw=read 00:12:53.531 time_based=1 00:12:53.531 runtime=10 00:12:53.531 ioengine=libaio 00:12:53.531 direct=1 00:12:53.531 bs=4096 00:12:53.531 iodepth=1 00:12:53.531 norandommap=1 00:12:53.531 numjobs=1 00:12:53.531 00:12:53.531 [job0] 00:12:53.531 filename=/dev/nvme0n1 00:12:53.531 [job1] 00:12:53.531 filename=/dev/nvme0n2 00:12:53.531 
[job2] 00:12:53.531 filename=/dev/nvme0n3 00:12:53.531 [job3] 00:12:53.531 filename=/dev/nvme0n4 00:12:53.531 Could not set queue depth (nvme0n1) 00:12:53.531 Could not set queue depth (nvme0n2) 00:12:53.531 Could not set queue depth (nvme0n3) 00:12:53.531 Could not set queue depth (nvme0n4) 00:12:53.531 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:53.531 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:53.531 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:53.531 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:53.531 fio-3.35 00:12:53.531 Starting 4 threads 00:12:56.818 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:56.818 fio: pid=69043, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:56.818 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=34717696, buflen=4096 00:12:56.818 01:33:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:56.818 fio: pid=69042, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:56.818 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=40939520, buflen=4096 00:12:56.818 01:33:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:56.818 01:33:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:57.077 fio: pid=69040, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:57.077 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=42356736, buflen=4096 00:12:57.335 01:33:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:57.335 01:33:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:57.608 fio: pid=69041, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:57.608 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=52944896, buflen=4096 00:12:57.608 00:12:57.608 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69040: Sun Nov 17 01:33:05 2024 00:12:57.608 read: IOPS=2945, BW=11.5MiB/s (12.1MB/s)(40.4MiB/3511msec) 00:12:57.608 slat (usec): min=7, max=12653, avg=20.80, stdev=214.61 00:12:57.608 clat (usec): min=158, max=2702, avg=316.81, stdev=63.83 00:12:57.608 lat (usec): min=172, max=12891, avg=337.61, stdev=223.40 00:12:57.608 clat percentiles (usec): 00:12:57.608 | 1.00th=[ 182], 5.00th=[ 243], 10.00th=[ 265], 20.00th=[ 289], 00:12:57.608 | 30.00th=[ 302], 40.00th=[ 310], 50.00th=[ 318], 60.00th=[ 326], 00:12:57.608 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 367], 00:12:57.608 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 873], 99.95th=[ 1385], 00:12:57.608 | 99.99th=[ 2089] 00:12:57.608 bw ( KiB/s): min=11240, max=11952, per=27.23%, avg=11517.33, stdev=283.28, samples=6 00:12:57.608 
iops : min= 2810, max= 2988, avg=2879.33, stdev=70.82, samples=6 00:12:57.608 lat (usec) : 250=6.64%, 500=91.88%, 750=1.30%, 1000=0.10% 00:12:57.608 lat (msec) : 2=0.06%, 4=0.02% 00:12:57.608 cpu : usr=1.25%, sys=4.33%, ctx=10360, majf=0, minf=1 00:12:57.608 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.608 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.608 issued rwts: total=10342,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.609 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69041: Sun Nov 17 01:33:05 2024 00:12:57.609 read: IOPS=3275, BW=12.8MiB/s (13.4MB/s)(50.5MiB/3947msec) 00:12:57.609 slat (usec): min=7, max=18000, avg=18.42, stdev=230.50 00:12:57.609 clat (usec): min=150, max=3609, avg=285.45, stdev=77.99 00:12:57.609 lat (usec): min=162, max=18374, avg=303.87, stdev=244.56 00:12:57.609 clat percentiles (usec): 00:12:57.609 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 180], 20.00th=[ 245], 00:12:57.609 | 30.00th=[ 273], 40.00th=[ 285], 50.00th=[ 297], 60.00th=[ 306], 00:12:57.609 | 70.00th=[ 318], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 359], 00:12:57.609 | 99.00th=[ 388], 99.50th=[ 420], 99.90th=[ 807], 99.95th=[ 1352], 00:12:57.609 | 99.99th=[ 3195] 00:12:57.609 bw ( KiB/s): min=11776, max=13515, per=29.46%, avg=12462.29, stdev=635.78, samples=7 00:12:57.609 iops : min= 2944, max= 3378, avg=3115.43, stdev=158.77, samples=7 00:12:57.609 lat (usec) : 250=21.23%, 500=78.50%, 750=0.14%, 1000=0.05% 00:12:57.609 lat (msec) : 2=0.04%, 4=0.03% 00:12:57.609 cpu : usr=1.09%, sys=4.31%, ctx=12939, majf=0, minf=2 00:12:57.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.609 issued rwts: total=12927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.609 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69042: Sun Nov 17 01:33:05 2024 00:12:57.609 read: IOPS=3125, BW=12.2MiB/s (12.8MB/s)(39.0MiB/3198msec) 00:12:57.609 slat (usec): min=8, max=7671, avg=16.81, stdev=105.95 00:12:57.609 clat (usec): min=143, max=3593, avg=301.45, stdev=56.86 00:12:57.609 lat (usec): min=184, max=7964, avg=318.26, stdev=120.56 00:12:57.609 clat percentiles (usec): 00:12:57.609 | 1.00th=[ 186], 5.00th=[ 210], 10.00th=[ 260], 20.00th=[ 277], 00:12:57.609 | 30.00th=[ 289], 40.00th=[ 297], 50.00th=[ 306], 60.00th=[ 314], 00:12:57.609 | 70.00th=[ 322], 80.00th=[ 330], 90.00th=[ 347], 95.00th=[ 355], 00:12:57.609 | 99.00th=[ 379], 99.50th=[ 388], 99.90th=[ 529], 99.95th=[ 873], 00:12:57.609 | 99.99th=[ 3589] 00:12:57.609 bw ( KiB/s): min=11768, max=12984, per=29.00%, avg=12268.00, stdev=488.48, samples=6 00:12:57.609 iops : min= 2942, max= 3246, avg=3067.00, stdev=122.12, samples=6 00:12:57.609 lat (usec) : 250=9.44%, 500=90.43%, 750=0.06%, 1000=0.02% 00:12:57.609 lat (msec) : 2=0.02%, 4=0.02% 00:12:57.609 cpu : usr=1.31%, sys=4.57%, ctx=10001, majf=0, minf=2 00:12:57.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:12:57.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.609 issued rwts: total=9996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.609 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=69043: Sun Nov 17 01:33:05 2024 00:12:57.609 read: IOPS=2879, BW=11.2MiB/s (11.8MB/s)(33.1MiB/2944msec) 00:12:57.609 slat (usec): min=13, max=268, avg=20.26, stdev= 5.90 00:12:57.609 clat (usec): min=190, max=2701, avg=324.82, stdev=54.52 00:12:57.609 lat (usec): min=206, max=2745, avg=345.08, stdev=55.13 00:12:57.609 clat percentiles (usec): 00:12:57.609 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 289], 20.00th=[ 302], 00:12:57.609 | 30.00th=[ 310], 40.00th=[ 314], 50.00th=[ 322], 60.00th=[ 326], 00:12:57.609 | 70.00th=[ 334], 80.00th=[ 343], 90.00th=[ 355], 95.00th=[ 367], 00:12:57.609 | 99.00th=[ 519], 99.50th=[ 553], 99.90th=[ 734], 99.95th=[ 840], 00:12:57.609 | 99.99th=[ 2704] 00:12:57.609 bw ( KiB/s): min=11344, max=12016, per=27.37%, avg=11576.00, stdev=275.22, samples=5 00:12:57.609 iops : min= 2836, max= 3004, avg=2894.00, stdev=68.80, samples=5 00:12:57.609 lat (usec) : 250=0.45%, 500=98.28%, 750=1.17%, 1000=0.06% 00:12:57.609 lat (msec) : 2=0.01%, 4=0.02% 00:12:57.609 cpu : usr=1.22%, sys=5.06%, ctx=8477, majf=0, minf=2 00:12:57.609 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:57.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.609 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.609 issued rwts: total=8477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.609 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:57.609 00:12:57.609 Run status group 0 (all jobs): 00:12:57.609 READ: bw=41.3MiB/s (43.3MB/s), 11.2MiB/s-12.8MiB/s (11.8MB/s-13.4MB/s), io=163MiB (171MB), run=2944-3947msec 00:12:57.609 00:12:57.609 Disk stats (read/write): 00:12:57.609 nvme0n1: ios=9830/0, merge=0/0, ticks=3154/0, in_queue=3154, util=95.19% 00:12:57.609 nvme0n2: ios=12512/0, merge=0/0, ticks=3465/0, in_queue=3465, util=95.51% 00:12:57.609 nvme0n3: ios=9642/0, merge=0/0, ticks=2857/0, in_queue=2857, util=96.40% 00:12:57.609 nvme0n4: ios=8264/0, merge=0/0, ticks=2736/0, in_queue=2736, util=96.79% 00:12:57.869 01:33:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:57.869 01:33:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:58.128 01:33:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:58.128 01:33:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:58.694 01:33:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:58.694 01:33:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:58.961 01:33:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:58.961 01:33:07 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:59.598 01:33:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:59.598 01:33:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 68994 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:59.857 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:59.857 nvmf hotplug test: fio failed as expected 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:59.857 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:13:00.116 rmmod nvme_tcp 00:13:00.116 rmmod nvme_fabrics 00:13:00.116 rmmod nvme_keyring 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 68606 ']' 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 68606 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 68606 ']' 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 68606 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:00.116 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68606 00:13:00.375 killing process with pid 68606 00:13:00.375 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:00.375 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:00.375 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68606' 00:13:00.375 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 68606 00:13:00.375 01:33:08 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 68606 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:01.320 01:33:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@300 -- # return 0 00:13:01.320 00:13:01.320 real 0m22.309s 00:13:01.320 user 1m22.989s 00:13:01.320 sys 0m10.557s 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.320 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:01.320 ************************************ 00:13:01.320 END TEST nvmf_fio_target 00:13:01.320 ************************************ 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:01.581 ************************************ 00:13:01.581 START TEST nvmf_bdevio 00:13:01.581 ************************************ 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:01.581 * Looking for test storage... 
00:13:01.581 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:01.581 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.582 --rc genhtml_branch_coverage=1 00:13:01.582 --rc genhtml_function_coverage=1 00:13:01.582 --rc genhtml_legend=1 00:13:01.582 --rc geninfo_all_blocks=1 00:13:01.582 --rc geninfo_unexecuted_blocks=1 00:13:01.582 00:13:01.582 ' 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.582 --rc genhtml_branch_coverage=1 00:13:01.582 --rc genhtml_function_coverage=1 00:13:01.582 --rc genhtml_legend=1 00:13:01.582 --rc geninfo_all_blocks=1 00:13:01.582 --rc geninfo_unexecuted_blocks=1 00:13:01.582 00:13:01.582 ' 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.582 --rc genhtml_branch_coverage=1 00:13:01.582 --rc genhtml_function_coverage=1 00:13:01.582 --rc genhtml_legend=1 00:13:01.582 --rc geninfo_all_blocks=1 00:13:01.582 --rc geninfo_unexecuted_blocks=1 00:13:01.582 00:13:01.582 ' 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:01.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:01.582 --rc genhtml_branch_coverage=1 00:13:01.582 --rc genhtml_function_coverage=1 00:13:01.582 --rc genhtml_legend=1 00:13:01.582 --rc geninfo_all_blocks=1 00:13:01.582 --rc geninfo_unexecuted_blocks=1 00:13:01.582 00:13:01.582 ' 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:01.582 01:33:09 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:01.582 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 
00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:01.582 Cannot find device "nvmf_init_br" 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # true 00:13:01.582 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:01.583 Cannot find device "nvmf_init_br2" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:01.842 Cannot find device "nvmf_tgt_br" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@164 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:01.842 Cannot find device "nvmf_tgt_br2" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@165 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:01.842 Cannot find device "nvmf_init_br" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@166 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:01.842 Cannot find device "nvmf_init_br2" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@167 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:01.842 Cannot find device "nvmf_tgt_br" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@168 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:01.842 Cannot find device "nvmf_tgt_br2" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:01.842 Cannot find device "nvmf_br" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@170 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:01.842 Cannot find device "nvmf_init_if" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:01.842 Cannot find device "nvmf_init_if2" 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:01.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@173 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:01.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@174 -- # true 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:01.842 
01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:01.842 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 
4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:02.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:02.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.090 ms 00:13:02.102 00:13:02.102 --- 10.0.0.3 ping statistics --- 00:13:02.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.102 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:02.102 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:02.102 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:13:02.102 00:13:02.102 --- 10.0.0.4 ping statistics --- 00:13:02.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.102 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:02.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:02.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:13:02.102 00:13:02.102 --- 10.0.0.1 ping statistics --- 00:13:02.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.102 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:02.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:02.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:13:02.102 00:13:02.102 --- 10.0.0.2 ping statistics --- 00:13:02.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:02.102 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@461 -- # return 0 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=69380 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 69380 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 69380 ']' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:02.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:02.102 01:33:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:02.102 [2024-11-17 01:33:10.541620] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:13:02.102 [2024-11-17 01:33:10.541785] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:02.362 [2024-11-17 01:33:10.731541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:02.621 [2024-11-17 01:33:10.862725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:02.621 [2024-11-17 01:33:10.862838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:02.621 [2024-11-17 01:33:10.862877] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:02.621 [2024-11-17 01:33:10.862892] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:02.621 [2024-11-17 01:33:10.862907] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:02.621 [2024-11-17 01:33:10.865144] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:02.621 [2024-11-17 01:33:10.865320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:02.621 [2024-11-17 01:33:10.865645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.621 [2024-11-17 01:33:10.865650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:02.621 [2024-11-17 01:33:11.078228] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:03.190 [2024-11-17 01:33:11.589064] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.190 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 Malloc0 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s 
SPDK00000000000001 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:03.448 [2024-11-17 01:33:11.704754] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:03.448 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:03.448 { 00:13:03.448 "params": { 00:13:03.448 "name": "Nvme$subsystem", 00:13:03.448 "trtype": "$TEST_TRANSPORT", 00:13:03.448 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:03.448 "adrfam": "ipv4", 00:13:03.448 "trsvcid": "$NVMF_PORT", 00:13:03.448 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:03.448 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:03.448 "hdgst": ${hdgst:-false}, 00:13:03.448 "ddgst": ${ddgst:-false} 00:13:03.448 }, 00:13:03.449 "method": "bdev_nvme_attach_controller" 00:13:03.449 } 00:13:03.449 EOF 00:13:03.449 )") 00:13:03.449 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:03.449 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:13:03.449 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:03.449 01:33:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:03.449 "params": { 00:13:03.449 "name": "Nvme1", 00:13:03.449 "trtype": "tcp", 00:13:03.449 "traddr": "10.0.0.3", 00:13:03.449 "adrfam": "ipv4", 00:13:03.449 "trsvcid": "4420", 00:13:03.449 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:03.449 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:03.449 "hdgst": false, 00:13:03.449 "ddgst": false 00:13:03.449 }, 00:13:03.449 "method": "bdev_nvme_attach_controller" 00:13:03.449 }' 00:13:03.449 [2024-11-17 01:33:11.819713] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:13:03.449 [2024-11-17 01:33:11.819916] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69416 ] 00:13:03.707 [2024-11-17 01:33:12.006516] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:03.707 [2024-11-17 01:33:12.140388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.707 [2024-11-17 01:33:12.140508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.707 [2024-11-17 01:33:12.140519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.966 [2024-11-17 01:33:12.338780] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:13:04.225 I/O targets: 00:13:04.225 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:04.225 00:13:04.225 00:13:04.225 CUnit - A unit testing framework for C - Version 2.1-3 00:13:04.225 http://cunit.sourceforge.net/ 00:13:04.225 00:13:04.225 00:13:04.225 Suite: bdevio tests on: Nvme1n1 00:13:04.225 Test: blockdev write read block ...passed 00:13:04.225 Test: blockdev write zeroes read block ...passed 00:13:04.225 Test: blockdev write zeroes read no split ...passed 00:13:04.225 Test: blockdev write zeroes read split ...passed 00:13:04.225 Test: blockdev write zeroes read split partial ...passed 00:13:04.225 Test: blockdev reset ...[2024-11-17 01:33:12.605283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:04.225 [2024-11-17 01:33:12.605615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b280 (9): Bad file descriptor 00:13:04.225 [2024-11-17 01:33:12.622020] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:04.225 passed 00:13:04.225 Test: blockdev write read 8 blocks ...passed 00:13:04.225 Test: blockdev write read size > 128k ...passed 00:13:04.225 Test: blockdev write read invalid size ...passed 00:13:04.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:04.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:04.225 Test: blockdev write read max offset ...passed 00:13:04.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:04.225 Test: blockdev writev readv 8 blocks ...passed 00:13:04.225 Test: blockdev writev readv 30 x 1block ...passed 00:13:04.225 Test: blockdev writev readv block ...passed 00:13:04.225 Test: blockdev writev readv size > 128k ...passed 00:13:04.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:04.225 Test: blockdev comparev and writev ...[2024-11-17 01:33:12.636116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:04.225 [2024-11-17 01:33:12.636321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:04.225 [2024-11-17 01:33:12.636372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:04.225 [2024-11-17 01:33:12.636649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:04.225 [2024-11-17 01:33:12.637096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:04.225 [2024-11-17 01:33:12.637171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:04.225 [2024-11-17 01:33:12.637199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:04.225 [2024-11-17 01:33:12.637218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:04.225 passed 00:13:04.225 Test: blockdev nvme passthru rw ...[2024-11-17 01:33:12.637891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:04.225 [2024-11-17 01:33:12.637930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:04.225 [2024-11-17 01:33:12.637956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:04.225 [2024-11-17 01:33:12.637977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:04.225 [2024-11-17 01:33:12.638334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:04.225 [2024-11-17 01:33:12.638363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:04.225 [2024-11-17 01:33:12.638387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:04.225 [2024-11-17 01:33:12.638404] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:04.225 passed 00:13:04.225 Test: blockdev nvme passthru vendor specific ...[2024-11-17 01:33:12.639855] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:04.225 [2024-11-17 01:33:12.639996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:04.225 [2024-11-17 01:33:12.640389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:04.225 [2024-11-17 01:33:12.640446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:04.225 [2024-11-17 01:33:12.640663] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:04.225 [2024-11-17 01:33:12.640693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:04.225 passed 00:13:04.225 Test: blockdev nvme admin passthru ...[2024-11-17 01:33:12.641066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:04.225 [2024-11-17 01:33:12.641106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:04.225 passed 00:13:04.225 Test: blockdev copy ...passed 00:13:04.225 00:13:04.225 Run Summary: Type Total Ran Passed Failed Inactive 00:13:04.225 suites 1 1 n/a 0 0 00:13:04.225 tests 23 23 23 0 0 00:13:04.225 asserts 152 152 152 0 n/a 00:13:04.225 00:13:04.225 Elapsed time = 0.301 seconds 00:13:05.161 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:05.161 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.161 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:05.420 rmmod nvme_tcp 00:13:05.420 rmmod nvme_fabrics 00:13:05.420 rmmod nvme_keyring 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:13:05.420 
01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 69380 ']' 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 69380 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 69380 ']' 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 69380 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69380 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:13:05.420 killing process with pid 69380 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69380' 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 69380 00:13:05.420 01:33:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 69380 00:13:06.357 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:06.357 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:06.357 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:06.357 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # ip link 
delete nvmf_init_if 00:13:06.616 01:33:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:13:06.616 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:06.616 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:06.616 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@246 -- # remove_spdk_ns 00:13:06.616 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:06.616 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:06.616 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@300 -- # return 0 00:13:06.875 00:13:06.875 real 0m5.301s 00:13:06.875 user 0m19.426s 00:13:06.875 sys 0m1.078s 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.875 ************************************ 00:13:06.875 END TEST nvmf_bdevio 00:13:06.875 ************************************ 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:06.875 00:13:06.875 real 2m55.382s 00:13:06.875 user 7m46.914s 00:13:06.875 sys 0m55.270s 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:06.875 ************************************ 00:13:06.875 END TEST nvmf_target_core 00:13:06.875 ************************************ 00:13:06.875 01:33:15 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:06.875 01:33:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:06.875 01:33:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:06.875 01:33:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:06.875 ************************************ 00:13:06.875 START TEST nvmf_target_extra 00:13:06.875 ************************************ 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:06.875 * Looking for test storage... 
00:13:06.875 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:13:06.875 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.135 --rc genhtml_branch_coverage=1 00:13:07.135 --rc genhtml_function_coverage=1 00:13:07.135 --rc genhtml_legend=1 00:13:07.135 --rc geninfo_all_blocks=1 00:13:07.135 --rc geninfo_unexecuted_blocks=1 00:13:07.135 00:13:07.135 ' 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.135 --rc genhtml_branch_coverage=1 00:13:07.135 --rc genhtml_function_coverage=1 00:13:07.135 --rc genhtml_legend=1 00:13:07.135 --rc geninfo_all_blocks=1 00:13:07.135 --rc geninfo_unexecuted_blocks=1 00:13:07.135 00:13:07.135 ' 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.135 --rc genhtml_branch_coverage=1 00:13:07.135 --rc genhtml_function_coverage=1 00:13:07.135 --rc genhtml_legend=1 00:13:07.135 --rc geninfo_all_blocks=1 00:13:07.135 --rc geninfo_unexecuted_blocks=1 00:13:07.135 00:13:07.135 ' 00:13:07.135 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.136 --rc genhtml_branch_coverage=1 00:13:07.136 --rc genhtml_function_coverage=1 00:13:07.136 --rc genhtml_legend=1 00:13:07.136 --rc geninfo_all_blocks=1 00:13:07.136 --rc geninfo_unexecuted_blocks=1 00:13:07.136 00:13:07.136 ' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.136 01:33:15 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.136 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 1 -eq 0 ]] 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:07.136 ************************************ 00:13:07.136 START TEST nvmf_auth_target 00:13:07.136 ************************************ 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:07.136 * Looking for test storage... 
00:13:07.136 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:07.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.136 --rc genhtml_branch_coverage=1 00:13:07.136 --rc genhtml_function_coverage=1 00:13:07.136 --rc genhtml_legend=1 00:13:07.136 --rc geninfo_all_blocks=1 00:13:07.136 --rc geninfo_unexecuted_blocks=1 00:13:07.136 00:13:07.136 ' 00:13:07.136 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:07.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.136 --rc genhtml_branch_coverage=1 00:13:07.137 --rc genhtml_function_coverage=1 00:13:07.137 --rc genhtml_legend=1 00:13:07.137 --rc geninfo_all_blocks=1 00:13:07.137 --rc geninfo_unexecuted_blocks=1 00:13:07.137 00:13:07.137 ' 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:07.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.137 --rc genhtml_branch_coverage=1 00:13:07.137 --rc genhtml_function_coverage=1 00:13:07.137 --rc genhtml_legend=1 00:13:07.137 --rc geninfo_all_blocks=1 00:13:07.137 --rc geninfo_unexecuted_blocks=1 00:13:07.137 00:13:07.137 ' 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:07.137 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:07.137 --rc genhtml_branch_coverage=1 00:13:07.137 --rc genhtml_function_coverage=1 00:13:07.137 --rc genhtml_legend=1 00:13:07.137 --rc geninfo_all_blocks=1 00:13:07.137 --rc geninfo_unexecuted_blocks=1 00:13:07.137 00:13:07.137 ' 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@7 -- # uname -s 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:07.137 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" 
"ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:07.137 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:13:07.397 
01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:13:07.397 Cannot find device "nvmf_init_br" 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # true 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:13:07.397 Cannot find device "nvmf_init_br2" 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # true 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:13:07.397 Cannot find device "nvmf_tgt_br" 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@164 -- # true 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:13:07.397 Cannot find device "nvmf_tgt_br2" 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@165 -- # true 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:13:07.397 Cannot find device "nvmf_init_br" 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@166 -- # true 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:13:07.397 Cannot find device "nvmf_init_br2" 00:13:07.397 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@167 -- # true 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:13:07.398 Cannot find device "nvmf_tgt_br" 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@168 -- # true 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:13:07.398 Cannot find device "nvmf_tgt_br2" 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # true 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:13:07.398 Cannot find device "nvmf_br" 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@170 -- # true 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:13:07.398 Cannot find device "nvmf_init_if" 00:13:07.398 01:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # true 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:13:07.398 Cannot find device "nvmf_init_if2" 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # true 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:13:07.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@173 -- # true 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:13:07.398 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@174 -- # true 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:13:07.398 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:13:07.657 01:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:13:07.657 01:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:13:07.657 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:13:07.657 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.104 ms 00:13:07.657 00:13:07.657 --- 10.0.0.3 ping statistics --- 00:13:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.657 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:13:07.657 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:13:07.657 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:13:07.657 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.052 ms 00:13:07.657 00:13:07.657 --- 10.0.0.4 ping statistics --- 00:13:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.657 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:13:07.657 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:13:07.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:07.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:13:07.657 00:13:07.657 --- 10.0.0.1 ping statistics --- 00:13:07.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.657 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:13:07.657 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:13:07.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:07.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:13:07.658 00:13:07.658 --- 10.0.0.2 ping statistics --- 00:13:07.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:07.658 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@461 -- # return 0 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=69746 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 69746 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69746 ']' 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
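The run of "Cannot find device" / ip link / ping lines above is nvmf_veth_init from test/nvmf/common.sh rebuilding the virtual test network before the target starts: a dedicated network namespace for nvmf_tgt, veth pairs whose host-side peers are enslaved to a bridge, ACCEPT rules tagged with an SPDK_NVMF comment so the teardown seen earlier can strip them, and ping checks in both directions. A condensed sketch of that topology, restricted to one initiator/target pair; names and addresses are taken from the trace, while the second pair, error handling, and the NET_TYPE branches are omitted.

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
# Bridge the host-side peers so the initiator and the namespaced target can reach each other.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
# Open the NVMe/TCP listener port; the comment lets the cleanup pass remove the rule later.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'
# Connectivity check in both directions before the target is started inside the namespace.
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

With the links up and reachable, the target is launched with ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt, which matches the nvmfappstart line in the trace.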
00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:07.658 01:33:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=69784 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=fb05a7c08c2b0f9a5b63b171c5e189d39875b924a31f2f8d 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.cex 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key fb05a7c08c2b0f9a5b63b171c5e189d39875b924a31f2f8d 0 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 fb05a7c08c2b0f9a5b63b171c5e189d39875b924a31f2f8d 0 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=fb05a7c08c2b0f9a5b63b171c5e189d39875b924a31f2f8d 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:09.038 01:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.cex 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.cex 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.cex 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:09.038 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=89cb145c8cbfd6c1447f32b7ed7f87a1f9dfe4c18f72f6e1ee319f3a10a41a4a 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.bVq 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 89cb145c8cbfd6c1447f32b7ed7f87a1f9dfe4c18f72f6e1ee319f3a10a41a4a 3 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 89cb145c8cbfd6c1447f32b7ed7f87a1f9dfe4c18f72f6e1ee319f3a10a41a4a 3 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=89cb145c8cbfd6c1447f32b7ed7f87a1f9dfe4c18f72f6e1ee319f3a10a41a4a 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.bVq 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.bVq 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.bVq 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:09.039 01:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=94c2a28de9c2e363ff98df7edf6f62a8 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.EON 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 94c2a28de9c2e363ff98df7edf6f62a8 1 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 94c2a28de9c2e363ff98df7edf6f62a8 1 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=94c2a28de9c2e363ff98df7edf6f62a8 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.EON 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.EON 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.EON 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=bcbb5b87b1bd8effdec595aac7f29f81fc966494fb1f3e32 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.fcs 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key bcbb5b87b1bd8effdec595aac7f29f81fc966494fb1f3e32 2 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 bcbb5b87b1bd8effdec595aac7f29f81fc966494fb1f3e32 2 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=bcbb5b87b1bd8effdec595aac7f29f81fc966494fb1f3e32 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.fcs 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.fcs 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.fcs 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=a7800a28aa0eca244fbb16827078d0dbb6cf4b5a93bf345e 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.UDa 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key a7800a28aa0eca244fbb16827078d0dbb6cf4b5a93bf345e 2 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 a7800a28aa0eca244fbb16827078d0dbb6cf4b5a93bf345e 2 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=a7800a28aa0eca244fbb16827078d0dbb6cf4b5a93bf345e 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:13:09.039 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:09.298 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.UDa 00:13:09.298 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.UDa 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.UDa 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:09.299 01:33:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=375f0cbdb21707ac6a8061576be5ec3d 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.d7T 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 375f0cbdb21707ac6a8061576be5ec3d 1 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 375f0cbdb21707ac6a8061576be5ec3d 1 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=375f0cbdb21707ac6a8061576be5ec3d 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.d7T 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.d7T 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.d7T 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=7f70d2008df1bed0b693db3d34a6e290ca94ab17b5f16c01f35982dc0c9bb9bd 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.LKh 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 
7f70d2008df1bed0b693db3d34a6e290ca94ab17b5f16c01f35982dc0c9bb9bd 3 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 7f70d2008df1bed0b693db3d34a6e290ca94ab17b5f16c01f35982dc0c9bb9bd 3 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=7f70d2008df1bed0b693db3d34a6e290ca94ab17b5f16c01f35982dc0c9bb9bd 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.LKh 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.LKh 00:13:09.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.LKh 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 69746 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69746 ']' 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.299 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:09.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 69784 /var/tmp/host.sock 00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 69784 ']' 00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.559 01:33:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cex 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.cex 00:13:10.127 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.cex 00:13:10.386 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.bVq ]] 00:13:10.386 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bVq 00:13:10.386 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.386 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.386 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.386 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bVq 00:13:10.386 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bVq 00:13:10.646 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:10.646 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.EON 00:13:10.646 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.646 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.646 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.646 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.EON 00:13:10.646 01:33:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.EON 00:13:10.912 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.fcs ]] 00:13:10.912 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fcs 00:13:10.912 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.912 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:10.912 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.912 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fcs 00:13:10.912 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fcs 00:13:11.172 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:11.172 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UDa 00:13:11.172 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.172 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.172 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.172 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.UDa 00:13:11.172 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.UDa 00:13:11.431 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.d7T ]] 00:13:11.431 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.d7T 00:13:11.431 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.431 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.431 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.431 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.d7T 00:13:11.431 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.d7T 00:13:11.690 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:13:11.690 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LKh 00:13:11.690 01:33:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.690 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:11.690 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.690 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.LKh 00:13:11.690 01:33:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.LKh 00:13:11.948 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:13:11.948 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:13:11.948 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:11.948 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:11.948 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:11.948 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.207 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:12.466 00:13:12.466 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:12.466 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:12.466 01:33:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:12.725 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:12.725 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:12.725 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.725 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:12.725 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.725 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:12.725 { 00:13:12.725 "cntlid": 1, 00:13:12.725 "qid": 0, 00:13:12.725 "state": "enabled", 00:13:12.725 "thread": "nvmf_tgt_poll_group_000", 00:13:12.725 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:12.725 "listen_address": { 00:13:12.725 "trtype": "TCP", 00:13:12.725 "adrfam": "IPv4", 00:13:12.725 "traddr": "10.0.0.3", 00:13:12.725 "trsvcid": "4420" 00:13:12.725 }, 00:13:12.725 "peer_address": { 00:13:12.725 "trtype": "TCP", 00:13:12.725 "adrfam": "IPv4", 00:13:12.725 "traddr": "10.0.0.1", 00:13:12.726 "trsvcid": "45228" 00:13:12.726 }, 00:13:12.726 "auth": { 00:13:12.726 "state": "completed", 00:13:12.726 "digest": "sha256", 00:13:12.726 "dhgroup": "null" 00:13:12.726 } 00:13:12.726 } 00:13:12.726 ]' 00:13:12.726 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:13.024 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:13.024 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:13.024 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:13.024 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:13.024 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:13.024 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:13.024 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:13.309 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:13.309 01:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:17.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.502 01:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:17.503 01:33:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:18.070 00:13:18.070 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:18.070 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:18.070 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:18.329 { 00:13:18.329 "cntlid": 3, 00:13:18.329 "qid": 0, 00:13:18.329 "state": "enabled", 00:13:18.329 "thread": "nvmf_tgt_poll_group_000", 00:13:18.329 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:18.329 "listen_address": { 00:13:18.329 "trtype": "TCP", 00:13:18.329 "adrfam": "IPv4", 00:13:18.329 "traddr": "10.0.0.3", 00:13:18.329 "trsvcid": "4420" 00:13:18.329 }, 00:13:18.329 "peer_address": { 00:13:18.329 "trtype": "TCP", 00:13:18.329 "adrfam": "IPv4", 00:13:18.329 "traddr": "10.0.0.1", 00:13:18.329 "trsvcid": "49714" 00:13:18.329 }, 00:13:18.329 "auth": { 00:13:18.329 "state": "completed", 00:13:18.329 "digest": "sha256", 00:13:18.329 "dhgroup": "null" 00:13:18.329 } 00:13:18.329 } 00:13:18.329 ]' 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:18.329 01:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:18.589 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret 
DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:18.589 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:19.525 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:19.525 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:19.525 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:19.525 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.525 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.525 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.525 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:19.525 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:19.525 01:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 
-n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:19.784 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:20.043 00:13:20.043 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:20.043 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:20.043 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:20.302 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:20.302 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:20.302 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.302 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:20.302 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.302 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:20.302 { 00:13:20.302 "cntlid": 5, 00:13:20.302 "qid": 0, 00:13:20.302 "state": "enabled", 00:13:20.302 "thread": "nvmf_tgt_poll_group_000", 00:13:20.302 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:20.302 "listen_address": { 00:13:20.302 "trtype": "TCP", 00:13:20.302 "adrfam": "IPv4", 00:13:20.302 "traddr": "10.0.0.3", 00:13:20.302 "trsvcid": "4420" 00:13:20.302 }, 00:13:20.302 "peer_address": { 00:13:20.302 "trtype": "TCP", 00:13:20.302 "adrfam": "IPv4", 00:13:20.302 "traddr": "10.0.0.1", 00:13:20.302 "trsvcid": "49742" 00:13:20.302 }, 00:13:20.302 "auth": { 00:13:20.302 "state": "completed", 00:13:20.302 "digest": "sha256", 00:13:20.302 "dhgroup": "null" 00:13:20.302 } 00:13:20.302 } 00:13:20.302 ]' 00:13:20.302 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:20.561 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:20.561 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:20.561 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:20.561 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:20.561 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:20.561 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:20.561 01:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:20.825 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:20.825 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:21.763 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:21.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:21.763 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:21.763 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.763 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.763 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.763 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:21.763 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:21.763 01:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:21.763 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:21.764 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:22.332 00:13:22.332 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:22.332 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:22.332 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:22.591 { 00:13:22.591 "cntlid": 7, 00:13:22.591 "qid": 0, 00:13:22.591 "state": "enabled", 00:13:22.591 "thread": "nvmf_tgt_poll_group_000", 00:13:22.591 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:22.591 "listen_address": { 00:13:22.591 "trtype": "TCP", 00:13:22.591 "adrfam": "IPv4", 00:13:22.591 "traddr": "10.0.0.3", 00:13:22.591 "trsvcid": "4420" 00:13:22.591 }, 00:13:22.591 "peer_address": { 00:13:22.591 "trtype": "TCP", 00:13:22.591 "adrfam": "IPv4", 00:13:22.591 "traddr": "10.0.0.1", 00:13:22.591 "trsvcid": "49774" 00:13:22.591 }, 00:13:22.591 "auth": { 00:13:22.591 "state": "completed", 00:13:22.591 "digest": "sha256", 00:13:22.591 "dhgroup": "null" 00:13:22.591 } 00:13:22.591 } 00:13:22.591 ]' 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:13:22.591 01:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:22.591 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:22.591 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:22.591 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:23.160 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:23.160 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:23.727 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:23.727 01:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:23.986 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:13:23.986 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:23.986 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:23.986 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:23.986 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:23.986 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:23.987 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.987 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.987 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:23.987 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.987 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.987 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t 
tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:23.987 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:24.246 00:13:24.246 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:24.246 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:24.246 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:24.505 { 00:13:24.505 "cntlid": 9, 00:13:24.505 "qid": 0, 00:13:24.505 "state": "enabled", 00:13:24.505 "thread": "nvmf_tgt_poll_group_000", 00:13:24.505 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:24.505 "listen_address": { 00:13:24.505 "trtype": "TCP", 00:13:24.505 "adrfam": "IPv4", 00:13:24.505 "traddr": "10.0.0.3", 00:13:24.505 "trsvcid": "4420" 00:13:24.505 }, 00:13:24.505 "peer_address": { 00:13:24.505 "trtype": "TCP", 00:13:24.505 "adrfam": "IPv4", 00:13:24.505 "traddr": "10.0.0.1", 00:13:24.505 "trsvcid": "49802" 00:13:24.505 }, 00:13:24.505 "auth": { 00:13:24.505 "state": "completed", 00:13:24.505 "digest": "sha256", 00:13:24.505 "dhgroup": "ffdhe2048" 00:13:24.505 } 00:13:24.505 } 00:13:24.505 ]' 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:24.505 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:24.764 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:24.764 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:24.764 01:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:25.023 
01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:25.023 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:25.591 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:25.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:25.591 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:25.591 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.591 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.591 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.591 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:25.591 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:25.591 01:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:25.849 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:26.108 00:13:26.367 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:26.367 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:26.367 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:26.626 { 00:13:26.626 "cntlid": 11, 00:13:26.626 "qid": 0, 00:13:26.626 "state": "enabled", 00:13:26.626 "thread": "nvmf_tgt_poll_group_000", 00:13:26.626 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:26.626 "listen_address": { 00:13:26.626 "trtype": "TCP", 00:13:26.626 "adrfam": "IPv4", 00:13:26.626 "traddr": "10.0.0.3", 00:13:26.626 "trsvcid": "4420" 00:13:26.626 }, 00:13:26.626 "peer_address": { 00:13:26.626 "trtype": "TCP", 00:13:26.626 "adrfam": "IPv4", 00:13:26.626 "traddr": "10.0.0.1", 00:13:26.626 "trsvcid": "59188" 00:13:26.626 }, 00:13:26.626 "auth": { 00:13:26.626 "state": "completed", 00:13:26.626 "digest": "sha256", 00:13:26.626 "dhgroup": "ffdhe2048" 00:13:26.626 } 00:13:26.626 } 00:13:26.626 ]' 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:26.626 01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:26.626 
01:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:26.884 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:26.884 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:27.453 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:27.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:27.453 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:27.453 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.453 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:27.744 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.744 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:27.744 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:27.744 01:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.003 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:28.262 00:13:28.262 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:28.262 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:28.262 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:28.521 { 00:13:28.521 "cntlid": 13, 00:13:28.521 "qid": 0, 00:13:28.521 "state": "enabled", 00:13:28.521 "thread": "nvmf_tgt_poll_group_000", 00:13:28.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:28.521 "listen_address": { 00:13:28.521 "trtype": "TCP", 00:13:28.521 "adrfam": "IPv4", 00:13:28.521 "traddr": "10.0.0.3", 00:13:28.521 "trsvcid": "4420" 00:13:28.521 }, 00:13:28.521 "peer_address": { 00:13:28.521 "trtype": "TCP", 00:13:28.521 "adrfam": "IPv4", 00:13:28.521 "traddr": "10.0.0.1", 00:13:28.521 "trsvcid": "59220" 00:13:28.521 }, 00:13:28.521 "auth": { 00:13:28.521 "state": "completed", 00:13:28.521 "digest": "sha256", 00:13:28.521 "dhgroup": "ffdhe2048" 00:13:28.521 } 00:13:28.521 } 00:13:28.521 ]' 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:28.521 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:28.780 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:28.780 01:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:28.780 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:28.780 01:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:28.780 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:29.039 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:29.039 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:29.606 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:29.606 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:29.606 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:29.606 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.606 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:29.606 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.606 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:29.606 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:29.606 01:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
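The trace above (and below) repeats the same DH-HMAC-CHAP round once per key and dhgroup. Condensed into plain commands, one iteration looks roughly like the sketch that follows. Everything in it is lifted from the invocations visible in this log: the host-side bdev application answers on /var/tmp/host.sock, the address/NQN/UUID values are the ones used by this run, and key2/ckey2 stand in for whichever keyfile pair the iteration selects (the keyfiles themselves, and the rpc_cmd wrapper that is assumed here to talk to the target app on its default RPC socket, are set up earlier in the test, outside this excerpt).

RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
HOSTSOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81

# 1) Restrict the initiator-side bdev layer to the digest/dhgroup under test.
"$RPC" -s "$HOSTSOCK" bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# 2) Allow the host on the subsystem with a DH-HMAC-CHAP key (plus a controller
#    key when the bidirectional case is being exercised).
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 3) Attach from the host side with the matching key(s) ...
"$RPC" -s "$HOSTSOCK" bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2

# 4) ... and verify that the controller came up and the qpair negotiated the
#    expected auth parameters.
"$RPC" -s "$HOSTSOCK" bdev_nvme_get_controllers | jq -r '.[].name'          # expect: nvme0
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth | .state, .digest, .dhgroup'
                                                # expect: completed / sha256 / ffdhe2048

# 5) Tear the bdev path down, repeat the handshake with nvme-cli using the raw
#    DHHC-1 secrets printed in the log, then drop the host from the subsystem.
"$RPC" -s "$HOSTSOCK" bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.3 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
    --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 \
    --dhchap-secret "DHHC-1:01:..." --dhchap-ctrl-secret "DHHC-1:02:..."
nvme disconnect -n "$SUBNQN"
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The per-iteration ckey expansion traced as ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) simply omits the controller key when no bidirectional secret is defined for that key index, which is why some add_host/attach_controller calls above carry only --dhchap-key.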
00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:29.865 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:30.124 00:13:30.124 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:30.124 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:30.124 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:30.692 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:30.692 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:30.692 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.692 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:30.692 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.692 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:30.692 { 00:13:30.692 "cntlid": 15, 00:13:30.692 "qid": 0, 00:13:30.692 "state": "enabled", 00:13:30.692 "thread": "nvmf_tgt_poll_group_000", 00:13:30.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:30.693 "listen_address": { 00:13:30.693 "trtype": "TCP", 00:13:30.693 "adrfam": "IPv4", 00:13:30.693 "traddr": "10.0.0.3", 00:13:30.693 "trsvcid": "4420" 00:13:30.693 }, 00:13:30.693 "peer_address": { 00:13:30.693 "trtype": "TCP", 00:13:30.693 "adrfam": "IPv4", 00:13:30.693 "traddr": "10.0.0.1", 00:13:30.693 "trsvcid": "59242" 00:13:30.693 }, 00:13:30.693 "auth": { 00:13:30.693 "state": "completed", 00:13:30.693 "digest": "sha256", 00:13:30.693 "dhgroup": "ffdhe2048" 00:13:30.693 } 00:13:30.693 } 00:13:30.693 ]' 00:13:30.693 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:30.693 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:30.693 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:30.693 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:13:30.693 01:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:30.693 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:30.693 
01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:30.693 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:30.953 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:30.953 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:31.888 01:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:31.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:31.888 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:32.147 00:13:32.405 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:32.405 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:32.405 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:32.663 { 00:13:32.663 "cntlid": 17, 00:13:32.663 "qid": 0, 00:13:32.663 "state": "enabled", 00:13:32.663 "thread": "nvmf_tgt_poll_group_000", 00:13:32.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:32.663 "listen_address": { 00:13:32.663 "trtype": "TCP", 00:13:32.663 "adrfam": "IPv4", 00:13:32.663 "traddr": "10.0.0.3", 00:13:32.663 "trsvcid": "4420" 00:13:32.663 }, 00:13:32.663 "peer_address": { 00:13:32.663 "trtype": "TCP", 00:13:32.663 "adrfam": "IPv4", 00:13:32.663 "traddr": "10.0.0.1", 00:13:32.663 "trsvcid": "59268" 00:13:32.663 }, 00:13:32.663 "auth": { 00:13:32.663 "state": "completed", 00:13:32.663 "digest": "sha256", 00:13:32.663 "dhgroup": "ffdhe3072" 00:13:32.663 } 00:13:32.663 } 00:13:32.663 ]' 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:32.663 01:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:32.663 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:32.663 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:32.663 01:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:32.663 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:32.663 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:32.922 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:32.922 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:33.857 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:33.857 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:33.857 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:33.857 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.857 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.857 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.857 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:33.857 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:33.857 01:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:33.857 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:13:33.857 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:33.857 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:33.857 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:33.858 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:34.423 00:13:34.424 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:34.424 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:34.424 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:34.682 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:34.682 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:34.682 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.682 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:34.682 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.682 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:34.682 { 00:13:34.682 "cntlid": 19, 00:13:34.682 "qid": 0, 00:13:34.682 "state": "enabled", 00:13:34.682 "thread": "nvmf_tgt_poll_group_000", 00:13:34.682 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:34.682 "listen_address": { 00:13:34.682 "trtype": "TCP", 00:13:34.682 "adrfam": "IPv4", 00:13:34.682 "traddr": "10.0.0.3", 00:13:34.682 "trsvcid": "4420" 00:13:34.682 }, 00:13:34.682 "peer_address": { 00:13:34.682 "trtype": "TCP", 00:13:34.682 "adrfam": "IPv4", 00:13:34.682 "traddr": "10.0.0.1", 00:13:34.682 "trsvcid": "59298" 00:13:34.682 }, 00:13:34.682 "auth": { 00:13:34.682 "state": "completed", 00:13:34.682 "digest": "sha256", 00:13:34.682 "dhgroup": "ffdhe3072" 00:13:34.682 } 00:13:34.682 } 00:13:34.682 ]' 00:13:34.682 01:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:34.682 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:34.682 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:34.682 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:34.682 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:34.940 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:34.940 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:34.940 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:35.198 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:35.199 01:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:35.764 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:35.764 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:35.764 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:35.764 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.764 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:35.764 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.764 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:35.764 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:35.764 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.023 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:36.589 00:13:36.589 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:36.589 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:36.589 01:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:36.847 { 00:13:36.847 "cntlid": 21, 00:13:36.847 "qid": 0, 00:13:36.847 "state": "enabled", 00:13:36.847 "thread": "nvmf_tgt_poll_group_000", 00:13:36.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:36.847 "listen_address": { 00:13:36.847 "trtype": "TCP", 00:13:36.847 "adrfam": "IPv4", 00:13:36.847 "traddr": "10.0.0.3", 00:13:36.847 "trsvcid": "4420" 00:13:36.847 }, 00:13:36.847 "peer_address": { 00:13:36.847 "trtype": "TCP", 00:13:36.847 "adrfam": "IPv4", 00:13:36.847 "traddr": "10.0.0.1", 00:13:36.847 "trsvcid": "35298" 00:13:36.847 }, 00:13:36.847 "auth": { 00:13:36.847 "state": "completed", 00:13:36.847 "digest": "sha256", 00:13:36.847 "dhgroup": "ffdhe3072" 00:13:36.847 } 00:13:36.847 } 00:13:36.847 ]' 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:36.847 01:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:36.847 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:37.105 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:37.105 01:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:38.040 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:38.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:38.040 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:38.040 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.040 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.040 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.040 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:38.040 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.040 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.299 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:38.558 00:13:38.558 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:38.558 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:38.558 01:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:38.816 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:38.816 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:38.816 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.816 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:39.074 { 00:13:39.074 "cntlid": 23, 00:13:39.074 "qid": 0, 00:13:39.074 "state": "enabled", 00:13:39.074 "thread": "nvmf_tgt_poll_group_000", 00:13:39.074 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:39.074 "listen_address": { 00:13:39.074 "trtype": "TCP", 00:13:39.074 "adrfam": "IPv4", 00:13:39.074 "traddr": "10.0.0.3", 00:13:39.074 "trsvcid": "4420" 00:13:39.074 }, 00:13:39.074 "peer_address": { 00:13:39.074 "trtype": "TCP", 00:13:39.074 "adrfam": "IPv4", 00:13:39.074 "traddr": "10.0.0.1", 00:13:39.074 "trsvcid": "35324" 00:13:39.074 }, 00:13:39.074 "auth": { 00:13:39.074 "state": "completed", 00:13:39.074 "digest": "sha256", 00:13:39.074 "dhgroup": "ffdhe3072" 00:13:39.074 } 00:13:39.074 } 00:13:39.074 ]' 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:39.074 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:39.333 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:39.333 01:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:39.912 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:39.912 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:39.912 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:39.912 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.912 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.170 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.171 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:40.171 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:40.171 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:40.171 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.429 01:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:40.688 00:13:40.688 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:40.688 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:40.688 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:40.946 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:40.946 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:40.946 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.946 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:40.946 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.946 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:40.946 { 00:13:40.946 "cntlid": 25, 00:13:40.946 "qid": 0, 00:13:40.946 "state": "enabled", 00:13:40.946 "thread": "nvmf_tgt_poll_group_000", 00:13:40.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:40.946 "listen_address": { 00:13:40.946 "trtype": "TCP", 00:13:40.946 "adrfam": "IPv4", 00:13:40.946 "traddr": "10.0.0.3", 00:13:40.946 "trsvcid": "4420" 00:13:40.946 }, 00:13:40.946 "peer_address": { 00:13:40.946 "trtype": "TCP", 00:13:40.946 "adrfam": "IPv4", 00:13:40.946 "traddr": "10.0.0.1", 00:13:40.946 "trsvcid": "35364" 00:13:40.946 }, 00:13:40.946 "auth": { 00:13:40.946 "state": "completed", 00:13:40.946 "digest": "sha256", 00:13:40.946 "dhgroup": "ffdhe4096" 00:13:40.946 } 00:13:40.946 } 00:13:40.946 ]' 00:13:40.946 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:13:41.204 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:41.204 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:41.204 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:41.204 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:41.204 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:41.204 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:41.204 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:41.463 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:41.463 01:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:42.399 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:42.399 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:42.399 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:42.399 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.399 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.399 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.399 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:42.399 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:42.399 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.658 01:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:42.917 00:13:42.917 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:42.918 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:42.918 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:43.176 { 00:13:43.176 "cntlid": 27, 00:13:43.176 "qid": 0, 00:13:43.176 "state": "enabled", 00:13:43.176 "thread": "nvmf_tgt_poll_group_000", 00:13:43.176 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:43.176 "listen_address": { 00:13:43.176 "trtype": "TCP", 00:13:43.176 "adrfam": "IPv4", 00:13:43.176 "traddr": "10.0.0.3", 00:13:43.176 "trsvcid": "4420" 00:13:43.176 }, 00:13:43.176 "peer_address": { 00:13:43.176 "trtype": "TCP", 00:13:43.176 "adrfam": "IPv4", 00:13:43.176 "traddr": "10.0.0.1", 00:13:43.176 "trsvcid": "35398" 00:13:43.176 }, 00:13:43.176 "auth": { 00:13:43.176 "state": "completed", 
00:13:43.176 "digest": "sha256", 00:13:43.176 "dhgroup": "ffdhe4096" 00:13:43.176 } 00:13:43.176 } 00:13:43.176 ]' 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:43.176 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:43.434 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:43.434 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:43.434 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:43.434 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:43.434 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:43.692 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:43.692 01:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:44.259 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:44.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:44.259 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:44.259 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.259 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.259 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.259 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:44.259 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:44.259 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:44.518 01:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.518 01:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:44.778 00:13:44.778 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:44.778 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:44.778 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:45.036 { 00:13:45.036 "cntlid": 29, 00:13:45.036 "qid": 0, 00:13:45.036 "state": "enabled", 00:13:45.036 "thread": "nvmf_tgt_poll_group_000", 00:13:45.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:45.036 "listen_address": { 00:13:45.036 "trtype": "TCP", 00:13:45.036 "adrfam": "IPv4", 00:13:45.036 "traddr": "10.0.0.3", 00:13:45.036 "trsvcid": "4420" 00:13:45.036 }, 00:13:45.036 "peer_address": { 00:13:45.036 "trtype": "TCP", 00:13:45.036 "adrfam": 
"IPv4", 00:13:45.036 "traddr": "10.0.0.1", 00:13:45.036 "trsvcid": "35428" 00:13:45.036 }, 00:13:45.036 "auth": { 00:13:45.036 "state": "completed", 00:13:45.036 "digest": "sha256", 00:13:45.036 "dhgroup": "ffdhe4096" 00:13:45.036 } 00:13:45.036 } 00:13:45.036 ]' 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:45.036 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:45.295 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:45.295 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:45.295 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:45.295 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:45.295 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:45.554 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:45.554 01:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:46.121 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:46.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:46.121 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:46.121 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.121 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.379 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.379 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:46.379 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:13:46.380 01:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.380 01:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:46.947 00:13:46.947 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:46.947 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:46.947 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:47.206 { 00:13:47.206 "cntlid": 31, 00:13:47.206 "qid": 0, 00:13:47.206 "state": "enabled", 00:13:47.206 "thread": "nvmf_tgt_poll_group_000", 00:13:47.206 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:47.206 "listen_address": { 00:13:47.206 "trtype": "TCP", 00:13:47.206 "adrfam": "IPv4", 00:13:47.206 "traddr": "10.0.0.3", 00:13:47.206 "trsvcid": "4420" 00:13:47.206 }, 00:13:47.206 "peer_address": { 00:13:47.206 "trtype": "TCP", 
00:13:47.206 "adrfam": "IPv4", 00:13:47.206 "traddr": "10.0.0.1", 00:13:47.206 "trsvcid": "46148" 00:13:47.206 }, 00:13:47.206 "auth": { 00:13:47.206 "state": "completed", 00:13:47.206 "digest": "sha256", 00:13:47.206 "dhgroup": "ffdhe4096" 00:13:47.206 } 00:13:47.206 } 00:13:47.206 ]' 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:13:47.206 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:47.464 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:47.464 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:47.464 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:47.723 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:47.723 01:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:48.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:48.290 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:13:48.549 
01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:48.549 01:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:49.116 00:13:49.116 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:49.116 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:49.116 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:49.375 { 00:13:49.375 "cntlid": 33, 00:13:49.375 "qid": 0, 00:13:49.375 "state": "enabled", 00:13:49.375 "thread": "nvmf_tgt_poll_group_000", 00:13:49.375 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:49.375 "listen_address": { 00:13:49.375 "trtype": "TCP", 00:13:49.375 "adrfam": "IPv4", 00:13:49.375 "traddr": 
"10.0.0.3", 00:13:49.375 "trsvcid": "4420" 00:13:49.375 }, 00:13:49.375 "peer_address": { 00:13:49.375 "trtype": "TCP", 00:13:49.375 "adrfam": "IPv4", 00:13:49.375 "traddr": "10.0.0.1", 00:13:49.375 "trsvcid": "46188" 00:13:49.375 }, 00:13:49.375 "auth": { 00:13:49.375 "state": "completed", 00:13:49.375 "digest": "sha256", 00:13:49.375 "dhgroup": "ffdhe6144" 00:13:49.375 } 00:13:49.375 } 00:13:49.375 ]' 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:49.375 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:49.376 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:49.376 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:49.376 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:49.376 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:49.376 01:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:49.634 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:49.634 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:50.202 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:50.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:50.202 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:50.202 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.202 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.202 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.202 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:50.202 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:50.202 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:50.461 01:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:51.028 00:13:51.028 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:51.028 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:51.028 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:51.286 { 00:13:51.286 "cntlid": 35, 00:13:51.286 "qid": 0, 00:13:51.286 "state": "enabled", 00:13:51.286 "thread": "nvmf_tgt_poll_group_000", 
00:13:51.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:51.286 "listen_address": { 00:13:51.286 "trtype": "TCP", 00:13:51.286 "adrfam": "IPv4", 00:13:51.286 "traddr": "10.0.0.3", 00:13:51.286 "trsvcid": "4420" 00:13:51.286 }, 00:13:51.286 "peer_address": { 00:13:51.286 "trtype": "TCP", 00:13:51.286 "adrfam": "IPv4", 00:13:51.286 "traddr": "10.0.0.1", 00:13:51.286 "trsvcid": "46210" 00:13:51.286 }, 00:13:51.286 "auth": { 00:13:51.286 "state": "completed", 00:13:51.286 "digest": "sha256", 00:13:51.286 "dhgroup": "ffdhe6144" 00:13:51.286 } 00:13:51.286 } 00:13:51.286 ]' 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:51.286 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:51.544 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:51.544 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:51.544 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:51.544 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:51.544 01:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:51.803 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:51.803 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:13:52.371 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:52.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:52.371 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:52.371 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.371 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.371 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.371 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:52.371 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:52.371 01:34:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:52.630 01:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:13:53.197 00:13:53.197 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:53.197 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:53.197 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:53.456 { 
00:13:53.456 "cntlid": 37, 00:13:53.456 "qid": 0, 00:13:53.456 "state": "enabled", 00:13:53.456 "thread": "nvmf_tgt_poll_group_000", 00:13:53.456 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:53.456 "listen_address": { 00:13:53.456 "trtype": "TCP", 00:13:53.456 "adrfam": "IPv4", 00:13:53.456 "traddr": "10.0.0.3", 00:13:53.456 "trsvcid": "4420" 00:13:53.456 }, 00:13:53.456 "peer_address": { 00:13:53.456 "trtype": "TCP", 00:13:53.456 "adrfam": "IPv4", 00:13:53.456 "traddr": "10.0.0.1", 00:13:53.456 "trsvcid": "46240" 00:13:53.456 }, 00:13:53.456 "auth": { 00:13:53.456 "state": "completed", 00:13:53.456 "digest": "sha256", 00:13:53.456 "dhgroup": "ffdhe6144" 00:13:53.456 } 00:13:53.456 } 00:13:53.456 ]' 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:53.456 01:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:53.741 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:53.741 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:13:54.319 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:54.319 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:54.319 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:54.319 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.319 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.319 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.319 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:54.319 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.319 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:54.578 01:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:13:55.142 00:13:55.142 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:55.142 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:55.142 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 
00:13:55.399 { 00:13:55.399 "cntlid": 39, 00:13:55.399 "qid": 0, 00:13:55.399 "state": "enabled", 00:13:55.399 "thread": "nvmf_tgt_poll_group_000", 00:13:55.399 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:55.399 "listen_address": { 00:13:55.399 "trtype": "TCP", 00:13:55.399 "adrfam": "IPv4", 00:13:55.399 "traddr": "10.0.0.3", 00:13:55.399 "trsvcid": "4420" 00:13:55.399 }, 00:13:55.399 "peer_address": { 00:13:55.399 "trtype": "TCP", 00:13:55.399 "adrfam": "IPv4", 00:13:55.399 "traddr": "10.0.0.1", 00:13:55.399 "trsvcid": "51816" 00:13:55.399 }, 00:13:55.399 "auth": { 00:13:55.399 "state": "completed", 00:13:55.399 "digest": "sha256", 00:13:55.399 "dhgroup": "ffdhe6144" 00:13:55.399 } 00:13:55.399 } 00:13:55.399 ]' 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:13:55.399 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:55.657 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:55.657 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:55.657 01:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:55.914 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:55.914 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:56.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:56.481 01:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:56.739 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:57.305 00:13:57.564 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:57.564 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:57.564 01:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:13:57.822 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:57.822 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:57.822 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.822 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.822 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:13:57.822 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:13:57.822 { 00:13:57.822 "cntlid": 41, 00:13:57.822 "qid": 0, 00:13:57.822 "state": "enabled", 00:13:57.822 "thread": "nvmf_tgt_poll_group_000", 00:13:57.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:13:57.822 "listen_address": { 00:13:57.822 "trtype": "TCP", 00:13:57.822 "adrfam": "IPv4", 00:13:57.822 "traddr": "10.0.0.3", 00:13:57.822 "trsvcid": "4420" 00:13:57.822 }, 00:13:57.822 "peer_address": { 00:13:57.822 "trtype": "TCP", 00:13:57.822 "adrfam": "IPv4", 00:13:57.822 "traddr": "10.0.0.1", 00:13:57.822 "trsvcid": "51836" 00:13:57.822 }, 00:13:57.822 "auth": { 00:13:57.822 "state": "completed", 00:13:57.822 "digest": "sha256", 00:13:57.822 "dhgroup": "ffdhe8192" 00:13:57.822 } 00:13:57.822 } 00:13:57.822 ]' 00:13:57.822 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:13:57.823 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:57.823 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:13:57.823 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:13:57.823 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:13:57.823 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:57.823 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:57.823 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:58.389 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:58.389 01:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:13:58.956 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:13:58.956 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:13:58.956 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:13:58.956 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.956 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.956 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
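The iteration that just completed (sha256 digest, ffdhe8192 DH group, key0) reduces to the command sequence below. This is a condensed sketch reconstructed from the xtrace lines above, not a verbatim excerpt of target/auth.sh: key0/ckey0 are DH-HMAC-CHAP keys the suite registered earlier in the run, and rpc_cmd is the test helper driving the target's RPC socket, while the host side is addressed explicitly through /var/tmp/host.sock.

    # Host bdev/nvme layer: allow only the digest and DH group under test.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192

    # Target: admit the host NQN on cnode0, bound to key0 (and the bidirectional ctrlr key ckey0).
    rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # Host: attach a controller over TCP, presenting the same keys for authentication.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock \
        bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 \
        -n nqn.2024-03.io.spdk:cnode0 -b nvme0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0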
00:13:58.956 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:13:58.956 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:58.956 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.214 01:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:13:59.779 00:13:59.779 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:13:59.779 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.779 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:00.037 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:00.037 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:00.037 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.037 01:34:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.037 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.037 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:00.037 { 00:14:00.037 "cntlid": 43, 00:14:00.037 "qid": 0, 00:14:00.037 "state": "enabled", 00:14:00.037 "thread": "nvmf_tgt_poll_group_000", 00:14:00.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:00.037 "listen_address": { 00:14:00.037 "trtype": "TCP", 00:14:00.037 "adrfam": "IPv4", 00:14:00.037 "traddr": "10.0.0.3", 00:14:00.037 "trsvcid": "4420" 00:14:00.037 }, 00:14:00.037 "peer_address": { 00:14:00.037 "trtype": "TCP", 00:14:00.037 "adrfam": "IPv4", 00:14:00.037 "traddr": "10.0.0.1", 00:14:00.037 "trsvcid": "51850" 00:14:00.037 }, 00:14:00.037 "auth": { 00:14:00.037 "state": "completed", 00:14:00.037 "digest": "sha256", 00:14:00.037 "dhgroup": "ffdhe8192" 00:14:00.037 } 00:14:00.037 } 00:14:00.037 ]' 00:14:00.037 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:00.037 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:00.295 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:00.295 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:00.295 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:00.295 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:00.295 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:00.295 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:00.553 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:00.553 01:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:01.118 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:01.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:01.375 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:01.375 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.375 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:14:01.375 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.375 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:01.375 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:01.375 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:01.633 01:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.200 00:14:02.200 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:02.200 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:02.200 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:02.458 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:02.458 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:02.458 01:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.458 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.717 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.717 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:02.717 { 00:14:02.717 "cntlid": 45, 00:14:02.717 "qid": 0, 00:14:02.717 "state": "enabled", 00:14:02.717 "thread": "nvmf_tgt_poll_group_000", 00:14:02.717 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:02.717 "listen_address": { 00:14:02.717 "trtype": "TCP", 00:14:02.717 "adrfam": "IPv4", 00:14:02.717 "traddr": "10.0.0.3", 00:14:02.717 "trsvcid": "4420" 00:14:02.717 }, 00:14:02.717 "peer_address": { 00:14:02.717 "trtype": "TCP", 00:14:02.717 "adrfam": "IPv4", 00:14:02.717 "traddr": "10.0.0.1", 00:14:02.717 "trsvcid": "51892" 00:14:02.717 }, 00:14:02.717 "auth": { 00:14:02.717 "state": "completed", 00:14:02.717 "digest": "sha256", 00:14:02.717 "dhgroup": "ffdhe8192" 00:14:02.717 } 00:14:02.717 } 00:14:02.717 ]' 00:14:02.717 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:02.717 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:02.717 01:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:02.717 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:02.717 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:02.717 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:02.717 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:02.717 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:02.976 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:02.976 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:03.543 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:03.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:03.543 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:03.543 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
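Each attach above is followed by the same verification, visible as the jq pipelines in the trace: the host must report the controller it created, and the target-side qpair must show DH-HMAC-CHAP completed with the digest and DH group under test. A minimal sketch of that check, assuming the controller name nvme0 and the qpairs JSON captured into a shell variable as the logged commands suggest:

    # Host reports exactly the controller created by the attach call.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'

    # Target-side view of the admin qpair, including its auth block.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach before moving on to the next key / DH-group combination.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0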
00:14:03.543 01:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.801 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.801 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:03.801 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:03.801 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.060 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:04.627 00:14:04.628 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:04.628 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:04.628 01:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:04.886 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:04.886 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:04.886 
01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.886 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.886 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.886 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:04.886 { 00:14:04.886 "cntlid": 47, 00:14:04.886 "qid": 0, 00:14:04.886 "state": "enabled", 00:14:04.886 "thread": "nvmf_tgt_poll_group_000", 00:14:04.886 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:04.886 "listen_address": { 00:14:04.886 "trtype": "TCP", 00:14:04.886 "adrfam": "IPv4", 00:14:04.886 "traddr": "10.0.0.3", 00:14:04.886 "trsvcid": "4420" 00:14:04.886 }, 00:14:04.886 "peer_address": { 00:14:04.886 "trtype": "TCP", 00:14:04.886 "adrfam": "IPv4", 00:14:04.886 "traddr": "10.0.0.1", 00:14:04.886 "trsvcid": "51916" 00:14:04.886 }, 00:14:04.886 "auth": { 00:14:04.886 "state": "completed", 00:14:04.886 "digest": "sha256", 00:14:04.886 "dhgroup": "ffdhe8192" 00:14:04.886 } 00:14:04.886 } 00:14:04.886 ]' 00:14:04.886 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:05.144 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.144 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:05.144 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:05.144 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:05.144 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.144 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.144 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:05.402 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:05.402 01:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:06.337 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:06.337 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:06.337 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:06.337 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.337 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
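At this point the sha256/ffdhe8192 pass is finished and the loop moves on to sha384 with the null DH group. The structure driving these repetitions is visible in the auth.sh markers (@118-@123): nested loops over digest, DH group and key slot. An outline of that flow, with hostrpc expanded the way the log shows it and connect_authenticate left as the script's own helper; the arrays below list only the combinations seen in this part of the run:

# Outline of the iteration driving this section of the log (target/auth.sh@118-123).
hostrpc() {
    # Host-side RPC wrapper, as expanded at target/auth.sh@31 in this run.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
}

digests=(sha256 sha384)               # subset seen here; auth.sh iterates its full list
dhgroups=(null ffdhe2048 ffdhe8192)   # likewise
keys=(key0 key1 key2 key3)            # four key slots exercised per combination

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      # Pin the host to one digest/DH group so the negotiation result is predictable.
      hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      # connect_authenticate (auth.sh@65-78): add the host with key$keyid, attach,
      # verify the qpair's auth fields, then detach again.
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done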
00:14:06.337 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.338 01:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:06.596 00:14:06.596 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:06.596 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:06.596 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:07.184 { 00:14:07.184 "cntlid": 49, 00:14:07.184 "qid": 0, 00:14:07.184 "state": "enabled", 00:14:07.184 "thread": "nvmf_tgt_poll_group_000", 00:14:07.184 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:07.184 "listen_address": { 00:14:07.184 "trtype": "TCP", 00:14:07.184 "adrfam": "IPv4", 00:14:07.184 "traddr": "10.0.0.3", 00:14:07.184 "trsvcid": "4420" 00:14:07.184 }, 00:14:07.184 "peer_address": { 00:14:07.184 "trtype": "TCP", 00:14:07.184 "adrfam": "IPv4", 00:14:07.184 "traddr": "10.0.0.1", 00:14:07.184 "trsvcid": "37658" 00:14:07.184 }, 00:14:07.184 "auth": { 00:14:07.184 "state": "completed", 00:14:07.184 "digest": "sha384", 00:14:07.184 "dhgroup": "null" 00:14:07.184 } 00:14:07.184 } 00:14:07.184 ]' 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.184 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:07.443 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:07.443 01:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:08.010 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:08.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:08.010 01:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:08.010 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.010 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.010 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.010 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:08.010 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:08.010 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.578 01:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:08.836 00:14:08.836 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:08.836 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 
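Each connect_authenticate call pairs a target-side and a host-side RPC: the subsystem learns the host NQN together with the DH-HMAC-CHAP key (plus an optional controller key for bidirectional authentication), and the host attaches using the same key names. A sketch of the two halves for the key1 case above, with the flags exactly as they appear in this run; the key1/ckey1 keyring entries are assumed to have been registered earlier in auth.sh:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81

# Target side: allow this host and bind it to key1; ckey1 makes the controller
# authenticate back (bidirectional DH-HMAC-CHAP).
"$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# Host side (the second SPDK app listening on /var/tmp/host.sock): attach with the same keys.
"$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
    -t tcp -f ipv4 -a 10.0.0.3 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1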
00:14:08.836 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:09.095 { 00:14:09.095 "cntlid": 51, 00:14:09.095 "qid": 0, 00:14:09.095 "state": "enabled", 00:14:09.095 "thread": "nvmf_tgt_poll_group_000", 00:14:09.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:09.095 "listen_address": { 00:14:09.095 "trtype": "TCP", 00:14:09.095 "adrfam": "IPv4", 00:14:09.095 "traddr": "10.0.0.3", 00:14:09.095 "trsvcid": "4420" 00:14:09.095 }, 00:14:09.095 "peer_address": { 00:14:09.095 "trtype": "TCP", 00:14:09.095 "adrfam": "IPv4", 00:14:09.095 "traddr": "10.0.0.1", 00:14:09.095 "trsvcid": "37688" 00:14:09.095 }, 00:14:09.095 "auth": { 00:14:09.095 "state": "completed", 00:14:09.095 "digest": "sha384", 00:14:09.095 "dhgroup": "null" 00:14:09.095 } 00:14:09.095 } 00:14:09.095 ]' 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:09.095 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:09.662 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:09.662 01:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:10.228 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:10.228 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:10.228 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:10.228 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.228 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.228 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.228 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:10.228 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:10.228 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.487 01:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:10.746 00:14:10.746 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:10.746 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:14:10.746 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:11.005 { 00:14:11.005 "cntlid": 53, 00:14:11.005 "qid": 0, 00:14:11.005 "state": "enabled", 00:14:11.005 "thread": "nvmf_tgt_poll_group_000", 00:14:11.005 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:11.005 "listen_address": { 00:14:11.005 "trtype": "TCP", 00:14:11.005 "adrfam": "IPv4", 00:14:11.005 "traddr": "10.0.0.3", 00:14:11.005 "trsvcid": "4420" 00:14:11.005 }, 00:14:11.005 "peer_address": { 00:14:11.005 "trtype": "TCP", 00:14:11.005 "adrfam": "IPv4", 00:14:11.005 "traddr": "10.0.0.1", 00:14:11.005 "trsvcid": "37724" 00:14:11.005 }, 00:14:11.005 "auth": { 00:14:11.005 "state": "completed", 00:14:11.005 "digest": "sha384", 00:14:11.005 "dhgroup": "null" 00:14:11.005 } 00:14:11.005 } 00:14:11.005 ]' 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:11.005 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:11.263 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:11.263 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:11.263 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:11.263 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:11.263 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:11.521 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:11.521 01:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:12.089 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:12.089 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:12.089 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:12.089 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.089 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.089 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.089 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:12.089 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:12.089 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.348 01:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:12.606 00:14:12.606 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:12.606 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
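Besides the attach/verify step, every pass also tears its state back down, which is why the same detach_controller and remove_host lines keep reappearing. The closing steps of one pass look roughly like this (same paths and NQNs as above; the nvme-cli connect/disconnect check that sits between them is sketched after the next pass):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81

# Host side: drop the authenticated bdev controller created for this pass.
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Target side: revoke the host entry (and with it the DH-HMAC-CHAP key binding),
# so the next key slot can be wired up from a clean state.
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"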
00:14:12.606 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.864 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.864 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.864 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:12.864 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.864 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:12.864 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:12.864 { 00:14:12.864 "cntlid": 55, 00:14:12.864 "qid": 0, 00:14:12.864 "state": "enabled", 00:14:12.864 "thread": "nvmf_tgt_poll_group_000", 00:14:12.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:12.864 "listen_address": { 00:14:12.864 "trtype": "TCP", 00:14:12.864 "adrfam": "IPv4", 00:14:12.864 "traddr": "10.0.0.3", 00:14:12.864 "trsvcid": "4420" 00:14:12.864 }, 00:14:12.864 "peer_address": { 00:14:12.864 "trtype": "TCP", 00:14:12.864 "adrfam": "IPv4", 00:14:12.864 "traddr": "10.0.0.1", 00:14:12.864 "trsvcid": "37748" 00:14:12.864 }, 00:14:12.864 "auth": { 00:14:12.864 "state": "completed", 00:14:12.864 "digest": "sha384", 00:14:12.864 "dhgroup": "null" 00:14:12.864 } 00:14:12.864 } 00:14:12.864 ]' 00:14:12.864 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:12.864 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:13.121 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:13.122 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:13.122 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:13.122 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:13.122 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:13.122 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:13.380 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:13.380 01:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
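The nvme-cli connections interleaved with the RPC checks differ in one detail worth noting: key3 is exercised with a host secret only (unidirectional authentication), while key0 through key2 also pass a controller secret so the target proves its identity back to the host. A sketch of the two variants using the flags from this run; the DHHC-1 strings are abbreviated placeholders for the full secrets printed in the log:

subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81
hostid=5af99618-86f8-46bf-8130-da23f42c5a81

# Unidirectional (key3 in this log): only the host presents a secret.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:03:...'                     # abbreviated
nvme disconnect -n "$subnqn"

# Bidirectional (key0..key2): a controller secret is supplied as well.
nvme connect -t tcp -a 10.0.0.3 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid "$hostid" -l 0 \
    --dhchap-secret 'DHHC-1:00:...' \
    --dhchap-ctrl-secret 'DHHC-1:03:...'                # abbreviated
nvme disconnect -n "$subnqn"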
00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:13.947 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.206 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:14.773 00:14:14.773 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:14.773 01:34:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:14.773 01:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.773 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.773 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.773 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.773 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.773 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.773 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:14.773 { 00:14:14.773 "cntlid": 57, 00:14:14.773 "qid": 0, 00:14:14.773 "state": "enabled", 00:14:14.773 "thread": "nvmf_tgt_poll_group_000", 00:14:14.773 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:14.773 "listen_address": { 00:14:14.773 "trtype": "TCP", 00:14:14.773 "adrfam": "IPv4", 00:14:14.773 "traddr": "10.0.0.3", 00:14:14.773 "trsvcid": "4420" 00:14:14.773 }, 00:14:14.773 "peer_address": { 00:14:14.773 "trtype": "TCP", 00:14:14.773 "adrfam": "IPv4", 00:14:14.773 "traddr": "10.0.0.1", 00:14:14.773 "trsvcid": "37770" 00:14:14.773 }, 00:14:14.773 "auth": { 00:14:14.773 "state": "completed", 00:14:14.773 "digest": "sha384", 00:14:14.773 "dhgroup": "ffdhe2048" 00:14:14.773 } 00:14:14.773 } 00:14:14.773 ]' 00:14:14.773 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:15.032 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:15.032 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:15.032 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:15.032 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:15.032 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:15.032 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:15.032 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:15.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:15.290 01:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: 
--dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:15.858 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:15.858 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:15.858 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:15.858 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.858 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:15.858 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.858 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:15.858 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:15.858 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:16.116 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.117 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:16.684 00:14:16.684 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:16.684 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:16.684 01:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:16.943 { 00:14:16.943 "cntlid": 59, 00:14:16.943 "qid": 0, 00:14:16.943 "state": "enabled", 00:14:16.943 "thread": "nvmf_tgt_poll_group_000", 00:14:16.943 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:16.943 "listen_address": { 00:14:16.943 "trtype": "TCP", 00:14:16.943 "adrfam": "IPv4", 00:14:16.943 "traddr": "10.0.0.3", 00:14:16.943 "trsvcid": "4420" 00:14:16.943 }, 00:14:16.943 "peer_address": { 00:14:16.943 "trtype": "TCP", 00:14:16.943 "adrfam": "IPv4", 00:14:16.943 "traddr": "10.0.0.1", 00:14:16.943 "trsvcid": "46100" 00:14:16.943 }, 00:14:16.943 "auth": { 00:14:16.943 "state": "completed", 00:14:16.943 "digest": "sha384", 00:14:16.943 "dhgroup": "ffdhe2048" 00:14:16.943 } 00:14:16.943 } 00:14:16.943 ]' 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:16.943 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:17.202 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:17.202 01:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:17.769 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:17.769 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:17.769 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:17.769 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.769 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:17.769 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:17.769 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:17.769 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:17.769 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.337 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:18.337 00:14:18.596 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:18.596 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:18.596 01:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:18.855 { 00:14:18.855 "cntlid": 61, 00:14:18.855 "qid": 0, 00:14:18.855 "state": "enabled", 00:14:18.855 "thread": "nvmf_tgt_poll_group_000", 00:14:18.855 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:18.855 "listen_address": { 00:14:18.855 "trtype": "TCP", 00:14:18.855 "adrfam": "IPv4", 00:14:18.855 "traddr": "10.0.0.3", 00:14:18.855 "trsvcid": "4420" 00:14:18.855 }, 00:14:18.855 "peer_address": { 00:14:18.855 "trtype": "TCP", 00:14:18.855 "adrfam": "IPv4", 00:14:18.855 "traddr": "10.0.0.1", 00:14:18.855 "trsvcid": "46124" 00:14:18.855 }, 00:14:18.855 "auth": { 00:14:18.855 "state": "completed", 00:14:18.855 "digest": "sha384", 00:14:18.855 "dhgroup": "ffdhe2048" 00:14:18.855 } 00:14:18.855 } 00:14:18.855 ]' 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:18.855 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:19.114 01:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:19.114 01:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:20.050 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:20.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:20.051 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:20.051 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.051 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.051 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.051 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:20.051 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:20.051 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.310 01:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:20.569 00:14:20.830 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:20.830 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:20.830 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:21.092 { 00:14:21.092 "cntlid": 63, 00:14:21.092 "qid": 0, 00:14:21.092 "state": "enabled", 00:14:21.092 "thread": "nvmf_tgt_poll_group_000", 00:14:21.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:21.092 "listen_address": { 00:14:21.092 "trtype": "TCP", 00:14:21.092 "adrfam": "IPv4", 00:14:21.092 "traddr": "10.0.0.3", 00:14:21.092 "trsvcid": "4420" 00:14:21.092 }, 00:14:21.092 "peer_address": { 00:14:21.092 "trtype": "TCP", 00:14:21.092 "adrfam": "IPv4", 00:14:21.092 "traddr": "10.0.0.1", 00:14:21.092 "trsvcid": "46150" 00:14:21.092 }, 00:14:21.092 "auth": { 00:14:21.092 "state": "completed", 00:14:21.092 "digest": "sha384", 00:14:21.092 "dhgroup": "ffdhe2048" 00:14:21.092 } 00:14:21.092 } 00:14:21.092 ]' 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:21.092 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:21.351 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:21.351 01:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:21.920 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:21.920 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:14:22.489 01:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:22.747 00:14:22.747 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:22.748 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:22.748 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:23.006 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:23.007 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:23.007 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.007 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:23.007 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.007 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:23.007 { 00:14:23.007 "cntlid": 65, 00:14:23.007 "qid": 0, 00:14:23.007 "state": "enabled", 00:14:23.007 "thread": "nvmf_tgt_poll_group_000", 00:14:23.007 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:23.007 "listen_address": { 00:14:23.007 "trtype": "TCP", 00:14:23.007 "adrfam": "IPv4", 00:14:23.007 "traddr": "10.0.0.3", 00:14:23.007 "trsvcid": "4420" 00:14:23.007 }, 00:14:23.007 "peer_address": { 00:14:23.007 "trtype": "TCP", 00:14:23.007 "adrfam": "IPv4", 00:14:23.007 "traddr": "10.0.0.1", 00:14:23.007 "trsvcid": "46172" 00:14:23.007 }, 00:14:23.007 "auth": { 00:14:23.007 "state": "completed", 00:14:23.007 "digest": "sha384", 00:14:23.007 "dhgroup": "ffdhe3072" 00:14:23.007 } 00:14:23.007 } 00:14:23.007 ]' 00:14:23.007 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:23.007 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:23.007 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:23.266 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:23.266 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:23.266 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:23.266 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:23.266 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:23.524 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:23.524 01:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:24.092 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:24.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:24.092 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:24.092 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.092 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.092 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.092 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:24.092 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:24.092 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:24.350 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:14:24.350 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.351 01:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.351 01:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:24.609 00:14:24.868 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:24.868 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:24.868 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:25.127 { 00:14:25.127 "cntlid": 67, 00:14:25.127 "qid": 0, 00:14:25.127 "state": "enabled", 00:14:25.127 "thread": "nvmf_tgt_poll_group_000", 00:14:25.127 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:25.127 "listen_address": { 00:14:25.127 "trtype": "TCP", 00:14:25.127 "adrfam": "IPv4", 00:14:25.127 "traddr": "10.0.0.3", 00:14:25.127 "trsvcid": "4420" 00:14:25.127 }, 00:14:25.127 "peer_address": { 00:14:25.127 "trtype": "TCP", 00:14:25.127 "adrfam": "IPv4", 00:14:25.127 "traddr": "10.0.0.1", 00:14:25.127 "trsvcid": "46204" 00:14:25.127 }, 00:14:25.127 "auth": { 00:14:25.127 "state": "completed", 00:14:25.127 "digest": "sha384", 00:14:25.127 "dhgroup": "ffdhe3072" 00:14:25.127 } 00:14:25.127 } 00:14:25.127 ]' 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:25.127 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:25.386 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:25.386 01:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:26.322 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:26.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:26.322 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:26.322 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.322 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.322 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.322 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:26.322 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:26.322 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.581 01:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:26.839 00:14:26.839 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:26.839 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:26.839 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:27.097 { 00:14:27.097 "cntlid": 69, 00:14:27.097 "qid": 0, 00:14:27.097 "state": "enabled", 00:14:27.097 "thread": "nvmf_tgt_poll_group_000", 00:14:27.097 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:27.097 "listen_address": { 00:14:27.097 "trtype": "TCP", 00:14:27.097 "adrfam": "IPv4", 00:14:27.097 "traddr": "10.0.0.3", 00:14:27.097 "trsvcid": "4420" 00:14:27.097 }, 00:14:27.097 "peer_address": { 00:14:27.097 "trtype": "TCP", 00:14:27.097 "adrfam": "IPv4", 00:14:27.097 "traddr": "10.0.0.1", 00:14:27.097 "trsvcid": "37446" 00:14:27.097 }, 00:14:27.097 "auth": { 00:14:27.097 "state": "completed", 00:14:27.097 "digest": "sha384", 00:14:27.097 "dhgroup": "ffdhe3072" 00:14:27.097 } 00:14:27.097 } 00:14:27.097 ]' 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:27.097 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:27.356 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:27.356 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:14:27.356 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:27.614 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:27.614 01:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:28.182 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:28.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:28.182 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:28.182 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.182 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.182 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.182 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:28.182 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:28.182 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:28.441 01:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:29.007 00:14:29.007 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:29.007 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:29.007 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:29.266 { 00:14:29.266 "cntlid": 71, 00:14:29.266 "qid": 0, 00:14:29.266 "state": "enabled", 00:14:29.266 "thread": "nvmf_tgt_poll_group_000", 00:14:29.266 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:29.266 "listen_address": { 00:14:29.266 "trtype": "TCP", 00:14:29.266 "adrfam": "IPv4", 00:14:29.266 "traddr": "10.0.0.3", 00:14:29.266 "trsvcid": "4420" 00:14:29.266 }, 00:14:29.266 "peer_address": { 00:14:29.266 "trtype": "TCP", 00:14:29.266 "adrfam": "IPv4", 00:14:29.266 "traddr": "10.0.0.1", 00:14:29.266 "trsvcid": "37466" 00:14:29.266 }, 00:14:29.266 "auth": { 00:14:29.266 "state": "completed", 00:14:29.266 "digest": "sha384", 00:14:29.266 "dhgroup": "ffdhe3072" 00:14:29.266 } 00:14:29.266 } 00:14:29.266 ]' 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:29.266 01:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:29.833 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:29.833 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:30.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:30.401 01:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:30.659 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:14:30.659 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:30.659 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:30.659 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:30.659 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:30.660 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:30.660 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.660 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:30.660 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.660 01:34:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:30.660 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.660 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:30.660 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:31.226 00:14:31.226 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:31.226 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:31.226 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:31.488 { 00:14:31.488 "cntlid": 73, 00:14:31.488 "qid": 0, 00:14:31.488 "state": "enabled", 00:14:31.488 "thread": "nvmf_tgt_poll_group_000", 00:14:31.488 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:31.488 "listen_address": { 00:14:31.488 "trtype": "TCP", 00:14:31.488 "adrfam": "IPv4", 00:14:31.488 "traddr": "10.0.0.3", 00:14:31.488 "trsvcid": "4420" 00:14:31.488 }, 00:14:31.488 "peer_address": { 00:14:31.488 "trtype": "TCP", 00:14:31.488 "adrfam": "IPv4", 00:14:31.488 "traddr": "10.0.0.1", 00:14:31.488 "trsvcid": "37496" 00:14:31.488 }, 00:14:31.488 "auth": { 00:14:31.488 "state": "completed", 00:14:31.488 "digest": "sha384", 00:14:31.488 "dhgroup": "ffdhe4096" 00:14:31.488 } 00:14:31.488 } 00:14:31.488 ]' 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- 
# [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:31.488 01:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.055 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:32.055 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:32.621 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:32.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:32.621 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:32.621 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.621 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.621 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.621 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:32.621 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:32.621 01:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.880 01:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:32.880 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:33.138 00:14:33.138 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:33.138 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:33.138 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:33.397 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:33.397 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:33.397 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.397 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.397 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.397 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:33.397 { 00:14:33.397 "cntlid": 75, 00:14:33.397 "qid": 0, 00:14:33.397 "state": "enabled", 00:14:33.397 "thread": "nvmf_tgt_poll_group_000", 00:14:33.397 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:33.397 "listen_address": { 00:14:33.397 "trtype": "TCP", 00:14:33.397 "adrfam": "IPv4", 00:14:33.397 "traddr": "10.0.0.3", 00:14:33.397 "trsvcid": "4420" 00:14:33.397 }, 00:14:33.397 "peer_address": { 00:14:33.397 "trtype": "TCP", 00:14:33.397 "adrfam": "IPv4", 00:14:33.397 "traddr": "10.0.0.1", 00:14:33.397 "trsvcid": "37516" 00:14:33.397 }, 00:14:33.397 "auth": { 00:14:33.397 "state": "completed", 00:14:33.397 "digest": "sha384", 00:14:33.397 "dhgroup": "ffdhe4096" 00:14:33.397 } 00:14:33.397 } 00:14:33.397 ]' 00:14:33.397 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:33.656 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:33.656 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:33.656 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 
== \f\f\d\h\e\4\0\9\6 ]] 00:14:33.656 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:33.656 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:33.656 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:33.656 01:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:33.914 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:33.914 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:34.480 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:34.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:34.480 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:34.480 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.480 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.480 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.480 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:34.480 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:34.480 01:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:34.739 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:14:34.739 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:34.739 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:34.739 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:34.739 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:34.740 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.740 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.740 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.740 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.740 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.740 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.740 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:34.740 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:35.308 00:14:35.308 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:35.308 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:35.308 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:35.567 { 00:14:35.567 "cntlid": 77, 00:14:35.567 "qid": 0, 00:14:35.567 "state": "enabled", 00:14:35.567 "thread": "nvmf_tgt_poll_group_000", 00:14:35.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:35.567 "listen_address": { 00:14:35.567 "trtype": "TCP", 00:14:35.567 "adrfam": "IPv4", 00:14:35.567 "traddr": "10.0.0.3", 00:14:35.567 "trsvcid": "4420" 00:14:35.567 }, 00:14:35.567 "peer_address": { 00:14:35.567 "trtype": "TCP", 00:14:35.567 "adrfam": "IPv4", 00:14:35.567 "traddr": "10.0.0.1", 00:14:35.567 "trsvcid": "55898" 00:14:35.567 }, 00:14:35.567 "auth": { 00:14:35.567 "state": "completed", 00:14:35.567 "digest": "sha384", 00:14:35.567 "dhgroup": "ffdhe4096" 00:14:35.567 } 00:14:35.567 } 00:14:35.567 ]' 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- 
# jq -r '.[0].auth.dhgroup' 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.567 01:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.824 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:35.824 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:36.390 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.390 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.390 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:36.390 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.390 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.649 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.649 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:36.649 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:36.649 01:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.649 01:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:36.649 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:37.214 00:14:37.214 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:37.214 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.214 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:37.472 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.472 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.472 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:37.472 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.472 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:37.472 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:37.472 { 00:14:37.472 "cntlid": 79, 00:14:37.472 "qid": 0, 00:14:37.472 "state": "enabled", 00:14:37.472 "thread": "nvmf_tgt_poll_group_000", 00:14:37.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:37.472 "listen_address": { 00:14:37.472 "trtype": "TCP", 00:14:37.472 "adrfam": "IPv4", 00:14:37.472 "traddr": "10.0.0.3", 00:14:37.472 "trsvcid": "4420" 00:14:37.472 }, 00:14:37.472 "peer_address": { 00:14:37.472 "trtype": "TCP", 00:14:37.472 "adrfam": "IPv4", 00:14:37.472 "traddr": "10.0.0.1", 00:14:37.472 "trsvcid": "55934" 00:14:37.472 }, 00:14:37.472 "auth": { 00:14:37.472 "state": "completed", 00:14:37.472 "digest": "sha384", 00:14:37.472 "dhgroup": "ffdhe4096" 00:14:37.472 } 00:14:37.472 } 00:14:37.472 ]' 00:14:37.472 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:37.472 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:37.472 01:34:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:37.730 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:37.730 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:37.730 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.730 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.730 01:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.988 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:37.988 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.555 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:38.555 01:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:38.812 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:14:38.812 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:38.812 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:38.812 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:38.812 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:38.813 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:38.813 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.813 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.813 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.813 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.813 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.813 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:38.813 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:39.378 00:14:39.378 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:39.378 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.378 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:39.636 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.636 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.636 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.636 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.636 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.636 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:39.636 { 00:14:39.636 "cntlid": 81, 00:14:39.636 "qid": 0, 00:14:39.636 "state": "enabled", 00:14:39.636 "thread": "nvmf_tgt_poll_group_000", 00:14:39.636 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:39.636 "listen_address": { 00:14:39.636 "trtype": "TCP", 00:14:39.636 "adrfam": "IPv4", 00:14:39.636 "traddr": "10.0.0.3", 00:14:39.636 "trsvcid": "4420" 00:14:39.636 }, 00:14:39.636 "peer_address": { 00:14:39.636 "trtype": "TCP", 00:14:39.636 "adrfam": "IPv4", 00:14:39.636 "traddr": "10.0.0.1", 00:14:39.636 "trsvcid": "55960" 00:14:39.636 }, 00:14:39.636 "auth": { 00:14:39.636 "state": "completed", 00:14:39.636 "digest": "sha384", 00:14:39.636 "dhgroup": "ffdhe6144" 00:14:39.636 } 00:14:39.636 } 00:14:39.636 ]' 00:14:39.636 01:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 
00:14:39.636 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:39.636 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:39.636 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:39.636 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:39.894 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.894 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.894 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:40.152 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:40.152 01:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:40.719 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:40.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:40.719 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:40.719 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.719 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.719 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.719 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:40.719 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:40.719 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe6144 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:40.977 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:41.541 00:14:41.541 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:41.541 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:41.542 01:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:41.799 { 00:14:41.799 "cntlid": 83, 00:14:41.799 "qid": 0, 00:14:41.799 "state": "enabled", 00:14:41.799 "thread": "nvmf_tgt_poll_group_000", 00:14:41.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:41.799 "listen_address": { 00:14:41.799 "trtype": "TCP", 00:14:41.799 "adrfam": "IPv4", 00:14:41.799 "traddr": "10.0.0.3", 00:14:41.799 "trsvcid": "4420" 00:14:41.799 }, 00:14:41.799 "peer_address": { 00:14:41.799 "trtype": "TCP", 00:14:41.799 "adrfam": "IPv4", 00:14:41.799 "traddr": "10.0.0.1", 00:14:41.799 "trsvcid": "56000" 00:14:41.799 }, 00:14:41.799 "auth": { 00:14:41.799 "state": "completed", 00:14:41.799 "digest": "sha384", 
00:14:41.799 "dhgroup": "ffdhe6144" 00:14:41.799 } 00:14:41.799 } 00:14:41.799 ]' 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:41.799 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:41.800 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:42.057 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.057 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.057 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.316 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:42.316 01:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:42.883 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:42.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:42.883 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:42.883 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.883 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.883 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.883 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:42.883 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:42.883 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:43.141 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:14:43.141 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:43.141 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
digest=sha384 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.142 01:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:43.710 00:14:43.710 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:43.710 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:43.710 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:43.974 { 00:14:43.974 "cntlid": 85, 00:14:43.974 "qid": 0, 00:14:43.974 "state": "enabled", 00:14:43.974 "thread": "nvmf_tgt_poll_group_000", 00:14:43.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:43.974 "listen_address": { 00:14:43.974 "trtype": "TCP", 00:14:43.974 "adrfam": "IPv4", 00:14:43.974 "traddr": "10.0.0.3", 00:14:43.974 "trsvcid": "4420" 00:14:43.974 }, 00:14:43.974 "peer_address": { 00:14:43.974 "trtype": "TCP", 00:14:43.974 "adrfam": "IPv4", 00:14:43.974 "traddr": "10.0.0.1", 00:14:43.974 "trsvcid": "56036" 
00:14:43.974 }, 00:14:43.974 "auth": { 00:14:43.974 "state": "completed", 00:14:43.974 "digest": "sha384", 00:14:43.974 "dhgroup": "ffdhe6144" 00:14:43.974 } 00:14:43.974 } 00:14:43.974 ]' 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:43.974 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:44.232 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:44.232 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:44.232 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:44.232 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:44.232 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:44.490 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:44.490 01:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:45.057 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:45.057 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:45.057 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:45.057 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.057 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.057 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.057 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:45.057 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:45.057 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key 
ckey qpairs 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.316 01:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:45.884 00:14:45.884 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:45.884 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:45.884 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:46.143 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:46.143 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:46.143 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.143 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.143 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.143 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:46.143 { 00:14:46.143 "cntlid": 87, 00:14:46.143 "qid": 0, 00:14:46.143 "state": "enabled", 00:14:46.143 "thread": "nvmf_tgt_poll_group_000", 00:14:46.143 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:46.143 "listen_address": { 00:14:46.143 "trtype": "TCP", 00:14:46.143 "adrfam": "IPv4", 00:14:46.143 "traddr": "10.0.0.3", 00:14:46.143 "trsvcid": "4420" 00:14:46.143 }, 00:14:46.143 "peer_address": { 00:14:46.143 "trtype": "TCP", 00:14:46.143 "adrfam": "IPv4", 00:14:46.143 "traddr": "10.0.0.1", 00:14:46.143 "trsvcid": 
"38766" 00:14:46.143 }, 00:14:46.143 "auth": { 00:14:46.143 "state": "completed", 00:14:46.143 "digest": "sha384", 00:14:46.143 "dhgroup": "ffdhe6144" 00:14:46.143 } 00:14:46.143 } 00:14:46.143 ]' 00:14:46.143 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:46.402 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:46.402 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:46.402 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:46.402 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:46.402 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:46.402 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:46.402 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:46.660 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:46.660 01:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:47.596 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:47.596 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:47.596 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:47.596 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest 
dhgroup key ckey qpairs 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.597 01:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:48.533 00:14:48.533 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:48.533 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.533 01:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:48.792 { 00:14:48.792 "cntlid": 89, 00:14:48.792 "qid": 0, 00:14:48.792 "state": "enabled", 00:14:48.792 "thread": "nvmf_tgt_poll_group_000", 00:14:48.792 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:48.792 "listen_address": { 00:14:48.792 "trtype": "TCP", 00:14:48.792 "adrfam": "IPv4", 00:14:48.792 "traddr": "10.0.0.3", 00:14:48.792 "trsvcid": "4420" 00:14:48.792 }, 00:14:48.792 "peer_address": { 00:14:48.792 
"trtype": "TCP", 00:14:48.792 "adrfam": "IPv4", 00:14:48.792 "traddr": "10.0.0.1", 00:14:48.792 "trsvcid": "38800" 00:14:48.792 }, 00:14:48.792 "auth": { 00:14:48.792 "state": "completed", 00:14:48.792 "digest": "sha384", 00:14:48.792 "dhgroup": "ffdhe8192" 00:14:48.792 } 00:14:48.792 } 00:14:48.792 ]' 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.792 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.358 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:49.358 01:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:49.926 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.926 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.926 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:49.926 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.926 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.926 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.926 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:49.926 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:49.926 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:50.185 01:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:50.185 01:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:51.122 00:14:51.122 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:51.122 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:51.122 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:51.381 { 00:14:51.381 "cntlid": 91, 00:14:51.381 "qid": 0, 00:14:51.381 "state": "enabled", 00:14:51.381 "thread": "nvmf_tgt_poll_group_000", 00:14:51.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 
00:14:51.381 "listen_address": { 00:14:51.381 "trtype": "TCP", 00:14:51.381 "adrfam": "IPv4", 00:14:51.381 "traddr": "10.0.0.3", 00:14:51.381 "trsvcid": "4420" 00:14:51.381 }, 00:14:51.381 "peer_address": { 00:14:51.381 "trtype": "TCP", 00:14:51.381 "adrfam": "IPv4", 00:14:51.381 "traddr": "10.0.0.1", 00:14:51.381 "trsvcid": "38830" 00:14:51.381 }, 00:14:51.381 "auth": { 00:14:51.381 "state": "completed", 00:14:51.381 "digest": "sha384", 00:14:51.381 "dhgroup": "ffdhe8192" 00:14:51.381 } 00:14:51.381 } 00:14:51.381 ]' 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:51.381 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.640 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.640 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.640 01:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.899 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:51.899 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:14:52.468 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.468 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:52.468 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.468 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.468 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.468 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.468 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:52.468 01:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.037 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:53.605 00:14:53.605 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.605 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.605 01:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:53.864 { 00:14:53.864 "cntlid": 93, 00:14:53.864 "qid": 0, 00:14:53.864 "state": "enabled", 00:14:53.864 "thread": 
"nvmf_tgt_poll_group_000", 00:14:53.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:53.864 "listen_address": { 00:14:53.864 "trtype": "TCP", 00:14:53.864 "adrfam": "IPv4", 00:14:53.864 "traddr": "10.0.0.3", 00:14:53.864 "trsvcid": "4420" 00:14:53.864 }, 00:14:53.864 "peer_address": { 00:14:53.864 "trtype": "TCP", 00:14:53.864 "adrfam": "IPv4", 00:14:53.864 "traddr": "10.0.0.1", 00:14:53.864 "trsvcid": "38864" 00:14:53.864 }, 00:14:53.864 "auth": { 00:14:53.864 "state": "completed", 00:14:53.864 "digest": "sha384", 00:14:53.864 "dhgroup": "ffdhe8192" 00:14:53.864 } 00:14:53.864 } 00:14:53.864 ]' 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.864 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.124 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.124 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.124 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.383 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:54.383 01:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:55.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:55.320 01:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.320 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.579 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.579 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:55.579 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:55.579 01:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:56.148 00:14:56.148 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:56.148 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:56.148 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:56.407 { 00:14:56.407 "cntlid": 95, 00:14:56.407 "qid": 0, 00:14:56.407 "state": "enabled", 00:14:56.407 
"thread": "nvmf_tgt_poll_group_000", 00:14:56.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:56.407 "listen_address": { 00:14:56.407 "trtype": "TCP", 00:14:56.407 "adrfam": "IPv4", 00:14:56.407 "traddr": "10.0.0.3", 00:14:56.407 "trsvcid": "4420" 00:14:56.407 }, 00:14:56.407 "peer_address": { 00:14:56.407 "trtype": "TCP", 00:14:56.407 "adrfam": "IPv4", 00:14:56.407 "traddr": "10.0.0.1", 00:14:56.407 "trsvcid": "40952" 00:14:56.407 }, 00:14:56.407 "auth": { 00:14:56.407 "state": "completed", 00:14:56.407 "digest": "sha384", 00:14:56.407 "dhgroup": "ffdhe8192" 00:14:56.407 } 00:14:56.407 } 00:14:56.407 ]' 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:56.407 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:56.665 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:56.665 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.665 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.665 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.665 01:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.938 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:56.938 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:14:57.533 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.533 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.533 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:57.533 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.533 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.792 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.792 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:57.792 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:57.792 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:57.792 01:35:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:57.792 01:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.051 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.052 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:58.311 00:14:58.311 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.311 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.311 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.569 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.569 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.569 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.569 01:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.569 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.569 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.569 { 00:14:58.569 "cntlid": 97, 00:14:58.569 "qid": 0, 00:14:58.569 "state": "enabled", 00:14:58.569 "thread": "nvmf_tgt_poll_group_000", 00:14:58.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:14:58.569 "listen_address": { 00:14:58.569 "trtype": "TCP", 00:14:58.569 "adrfam": "IPv4", 00:14:58.569 "traddr": "10.0.0.3", 00:14:58.569 "trsvcid": "4420" 00:14:58.569 }, 00:14:58.569 "peer_address": { 00:14:58.569 "trtype": "TCP", 00:14:58.569 "adrfam": "IPv4", 00:14:58.569 "traddr": "10.0.0.1", 00:14:58.569 "trsvcid": "40972" 00:14:58.569 }, 00:14:58.569 "auth": { 00:14:58.569 "state": "completed", 00:14:58.569 "digest": "sha512", 00:14:58.569 "dhgroup": "null" 00:14:58.569 } 00:14:58.569 } 00:14:58.569 ]' 00:14:58.569 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.828 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:14:58.828 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.828 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:58.828 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.828 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.828 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.828 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.087 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:59.087 01:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:14:59.656 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.915 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.915 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:14:59.915 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.915 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.915 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:14:59.915 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.915 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:14:59.915 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.174 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:00.433 00:15:00.433 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.433 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.433 01:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.691 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.691 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.691 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.691 01:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.691 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.691 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.691 { 00:15:00.691 "cntlid": 99, 00:15:00.691 "qid": 0, 00:15:00.691 "state": "enabled", 00:15:00.691 "thread": "nvmf_tgt_poll_group_000", 00:15:00.691 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:00.691 "listen_address": { 00:15:00.691 "trtype": "TCP", 00:15:00.691 "adrfam": "IPv4", 00:15:00.691 "traddr": "10.0.0.3", 00:15:00.691 "trsvcid": "4420" 00:15:00.691 }, 00:15:00.691 "peer_address": { 00:15:00.691 "trtype": "TCP", 00:15:00.691 "adrfam": "IPv4", 00:15:00.691 "traddr": "10.0.0.1", 00:15:00.691 "trsvcid": "41004" 00:15:00.691 }, 00:15:00.691 "auth": { 00:15:00.691 "state": "completed", 00:15:00.691 "digest": "sha512", 00:15:00.691 "dhgroup": "null" 00:15:00.691 } 00:15:00.691 } 00:15:00.691 ]' 00:15:00.691 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.948 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:00.949 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.949 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:00.949 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.949 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.949 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.949 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:01.206 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:01.206 01:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:02.141 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:02.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:02.141 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:02.141 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.141 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.141 01:35:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.141 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:02.141 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.141 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.400 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:02.659 00:15:02.659 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:02.659 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.659 01:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.917 { 00:15:02.917 "cntlid": 101, 00:15:02.917 "qid": 0, 00:15:02.917 "state": "enabled", 00:15:02.917 "thread": "nvmf_tgt_poll_group_000", 00:15:02.917 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:02.917 "listen_address": { 00:15:02.917 "trtype": "TCP", 00:15:02.917 "adrfam": "IPv4", 00:15:02.917 "traddr": "10.0.0.3", 00:15:02.917 "trsvcid": "4420" 00:15:02.917 }, 00:15:02.917 "peer_address": { 00:15:02.917 "trtype": "TCP", 00:15:02.917 "adrfam": "IPv4", 00:15:02.917 "traddr": "10.0.0.1", 00:15:02.917 "trsvcid": "41032" 00:15:02.917 }, 00:15:02.917 "auth": { 00:15:02.917 "state": "completed", 00:15:02.917 "digest": "sha512", 00:15:02.917 "dhgroup": "null" 00:15:02.917 } 00:15:02.917 } 00:15:02.917 ]' 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:02.917 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:03.176 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:03.176 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:03.176 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:03.434 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:03.434 01:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:04.002 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.002 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.002 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:04.002 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.002 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:15:04.002 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.002 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.002 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.002 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.568 01:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.826 00:15:04.826 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:04.826 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:04.826 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.085 { 00:15:05.085 "cntlid": 103, 00:15:05.085 "qid": 0, 00:15:05.085 "state": "enabled", 00:15:05.085 "thread": "nvmf_tgt_poll_group_000", 00:15:05.085 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:05.085 "listen_address": { 00:15:05.085 "trtype": "TCP", 00:15:05.085 "adrfam": "IPv4", 00:15:05.085 "traddr": "10.0.0.3", 00:15:05.085 "trsvcid": "4420" 00:15:05.085 }, 00:15:05.085 "peer_address": { 00:15:05.085 "trtype": "TCP", 00:15:05.085 "adrfam": "IPv4", 00:15:05.085 "traddr": "10.0.0.1", 00:15:05.085 "trsvcid": "41062" 00:15:05.085 }, 00:15:05.085 "auth": { 00:15:05.085 "state": "completed", 00:15:05.085 "digest": "sha512", 00:15:05.085 "dhgroup": "null" 00:15:05.085 } 00:15:05.085 } 00:15:05.085 ]' 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:05.085 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.344 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.344 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.344 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.603 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:05.603 01:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:06.172 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.741 01:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:07.000 00:15:07.000 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:07.000 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.000 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:07.260 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.260 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.260 
01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.260 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.260 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.260 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.260 { 00:15:07.260 "cntlid": 105, 00:15:07.260 "qid": 0, 00:15:07.260 "state": "enabled", 00:15:07.260 "thread": "nvmf_tgt_poll_group_000", 00:15:07.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:07.260 "listen_address": { 00:15:07.260 "trtype": "TCP", 00:15:07.260 "adrfam": "IPv4", 00:15:07.260 "traddr": "10.0.0.3", 00:15:07.260 "trsvcid": "4420" 00:15:07.260 }, 00:15:07.260 "peer_address": { 00:15:07.260 "trtype": "TCP", 00:15:07.260 "adrfam": "IPv4", 00:15:07.260 "traddr": "10.0.0.1", 00:15:07.260 "trsvcid": "59204" 00:15:07.260 }, 00:15:07.260 "auth": { 00:15:07.260 "state": "completed", 00:15:07.260 "digest": "sha512", 00:15:07.260 "dhgroup": "ffdhe2048" 00:15:07.260 } 00:15:07.260 } 00:15:07.260 ]' 00:15:07.260 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.260 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:07.260 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.518 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.518 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.518 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.518 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.518 01:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.777 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:07.777 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:08.345 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.345 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.345 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:08.345 01:35:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.345 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.345 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.345 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:08.345 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:08.345 01:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.912 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:09.171 00:15:09.171 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:09.171 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:09.171 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.429 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
[[ nvme0 == \n\v\m\e\0 ]] 00:15:09.429 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.429 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.429 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.429 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.430 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:09.430 { 00:15:09.430 "cntlid": 107, 00:15:09.430 "qid": 0, 00:15:09.430 "state": "enabled", 00:15:09.430 "thread": "nvmf_tgt_poll_group_000", 00:15:09.430 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:09.430 "listen_address": { 00:15:09.430 "trtype": "TCP", 00:15:09.430 "adrfam": "IPv4", 00:15:09.430 "traddr": "10.0.0.3", 00:15:09.430 "trsvcid": "4420" 00:15:09.430 }, 00:15:09.430 "peer_address": { 00:15:09.430 "trtype": "TCP", 00:15:09.430 "adrfam": "IPv4", 00:15:09.430 "traddr": "10.0.0.1", 00:15:09.430 "trsvcid": "59230" 00:15:09.430 }, 00:15:09.430 "auth": { 00:15:09.430 "state": "completed", 00:15:09.430 "digest": "sha512", 00:15:09.430 "dhgroup": "ffdhe2048" 00:15:09.430 } 00:15:09.430 } 00:15:09.430 ]' 00:15:09.430 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:09.430 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:09.430 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:09.690 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.690 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:09.690 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.691 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.691 01:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.949 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:09.949 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:10.517 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.517 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:10.517 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.517 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.517 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.517 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:10.517 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:10.517 01:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.776 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:11.345 00:15:11.345 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.345 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.345 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.604 { 00:15:11.604 "cntlid": 109, 00:15:11.604 "qid": 0, 00:15:11.604 "state": "enabled", 00:15:11.604 "thread": "nvmf_tgt_poll_group_000", 00:15:11.604 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:11.604 "listen_address": { 00:15:11.604 "trtype": "TCP", 00:15:11.604 "adrfam": "IPv4", 00:15:11.604 "traddr": "10.0.0.3", 00:15:11.604 "trsvcid": "4420" 00:15:11.604 }, 00:15:11.604 "peer_address": { 00:15:11.604 "trtype": "TCP", 00:15:11.604 "adrfam": "IPv4", 00:15:11.604 "traddr": "10.0.0.1", 00:15:11.604 "trsvcid": "59250" 00:15:11.604 }, 00:15:11.604 "auth": { 00:15:11.604 "state": "completed", 00:15:11.604 "digest": "sha512", 00:15:11.604 "dhgroup": "ffdhe2048" 00:15:11.604 } 00:15:11.604 } 00:15:11.604 ]' 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:11.604 01:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:11.604 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.604 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:11.604 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.604 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.604 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.173 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:12.173 01:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:12.742 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.742 01:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:12.742 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.742 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.742 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.742 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.742 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:12.742 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.002 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:13.572 00:15:13.572 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.572 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.572 01:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.831 { 00:15:13.831 "cntlid": 111, 00:15:13.831 "qid": 0, 00:15:13.831 "state": "enabled", 00:15:13.831 "thread": "nvmf_tgt_poll_group_000", 00:15:13.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:13.831 "listen_address": { 00:15:13.831 "trtype": "TCP", 00:15:13.831 "adrfam": "IPv4", 00:15:13.831 "traddr": "10.0.0.3", 00:15:13.831 "trsvcid": "4420" 00:15:13.831 }, 00:15:13.831 "peer_address": { 00:15:13.831 "trtype": "TCP", 00:15:13.831 "adrfam": "IPv4", 00:15:13.831 "traddr": "10.0.0.1", 00:15:13.831 "trsvcid": "59294" 00:15:13.831 }, 00:15:13.831 "auth": { 00:15:13.831 "state": "completed", 00:15:13.831 "digest": "sha512", 00:15:13.831 "dhgroup": "ffdhe2048" 00:15:13.831 } 00:15:13.831 } 00:15:13.831 ]' 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.831 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.090 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:14.090 01:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:15.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:15.027 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:15.028 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.028 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.028 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.028 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.028 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.028 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.028 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:15.596 00:15:15.596 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.596 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 
00:15:15.596 01:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.596 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.596 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.596 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.596 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.597 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.597 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.597 { 00:15:15.597 "cntlid": 113, 00:15:15.597 "qid": 0, 00:15:15.597 "state": "enabled", 00:15:15.597 "thread": "nvmf_tgt_poll_group_000", 00:15:15.597 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:15.597 "listen_address": { 00:15:15.597 "trtype": "TCP", 00:15:15.597 "adrfam": "IPv4", 00:15:15.597 "traddr": "10.0.0.3", 00:15:15.597 "trsvcid": "4420" 00:15:15.597 }, 00:15:15.597 "peer_address": { 00:15:15.597 "trtype": "TCP", 00:15:15.597 "adrfam": "IPv4", 00:15:15.597 "traddr": "10.0.0.1", 00:15:15.597 "trsvcid": "38196" 00:15:15.597 }, 00:15:15.597 "auth": { 00:15:15.597 "state": "completed", 00:15:15.597 "digest": "sha512", 00:15:15.597 "dhgroup": "ffdhe3072" 00:15:15.597 } 00:15:15.597 } 00:15:15.597 ]' 00:15:15.597 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.856 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:15.856 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.856 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:15.856 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.856 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.856 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.856 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:16.115 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:16.115 01:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret 
DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.051 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:17.619 00:15:17.619 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:17.619 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.619 01:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.878 { 00:15:17.878 "cntlid": 115, 00:15:17.878 "qid": 0, 00:15:17.878 "state": "enabled", 00:15:17.878 "thread": "nvmf_tgt_poll_group_000", 00:15:17.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:17.878 "listen_address": { 00:15:17.878 "trtype": "TCP", 00:15:17.878 "adrfam": "IPv4", 00:15:17.878 "traddr": "10.0.0.3", 00:15:17.878 "trsvcid": "4420" 00:15:17.878 }, 00:15:17.878 "peer_address": { 00:15:17.878 "trtype": "TCP", 00:15:17.878 "adrfam": "IPv4", 00:15:17.878 "traddr": "10.0.0.1", 00:15:17.878 "trsvcid": "38226" 00:15:17.878 }, 00:15:17.878 "auth": { 00:15:17.878 "state": "completed", 00:15:17.878 "digest": "sha512", 00:15:17.878 "dhgroup": "ffdhe3072" 00:15:17.878 } 00:15:17.878 } 00:15:17.878 ]' 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.878 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.879 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.446 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:18.446 01:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 
5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:19.014 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.014 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.014 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:19.014 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.014 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.014 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.014 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.014 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.015 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.274 01:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 
-q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:19.842 00:15:19.842 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:19.842 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:19.842 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.102 { 00:15:20.102 "cntlid": 117, 00:15:20.102 "qid": 0, 00:15:20.102 "state": "enabled", 00:15:20.102 "thread": "nvmf_tgt_poll_group_000", 00:15:20.102 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:20.102 "listen_address": { 00:15:20.102 "trtype": "TCP", 00:15:20.102 "adrfam": "IPv4", 00:15:20.102 "traddr": "10.0.0.3", 00:15:20.102 "trsvcid": "4420" 00:15:20.102 }, 00:15:20.102 "peer_address": { 00:15:20.102 "trtype": "TCP", 00:15:20.102 "adrfam": "IPv4", 00:15:20.102 "traddr": "10.0.0.1", 00:15:20.102 "trsvcid": "38258" 00:15:20.102 }, 00:15:20.102 "auth": { 00:15:20.102 "state": "completed", 00:15:20.102 "digest": "sha512", 00:15:20.102 "dhgroup": "ffdhe3072" 00:15:20.102 } 00:15:20.102 } 00:15:20.102 ]' 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.102 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.361 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:20.361 01:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:21.305 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.305 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:21.305 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.305 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.305 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.305 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.305 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:21.305 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.564 01:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:21.823 00:15:21.823 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:21.823 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:21.823 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.088 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.088 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.088 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.088 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.088 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.088 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.088 { 00:15:22.088 "cntlid": 119, 00:15:22.088 "qid": 0, 00:15:22.088 "state": "enabled", 00:15:22.088 "thread": "nvmf_tgt_poll_group_000", 00:15:22.088 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:22.088 "listen_address": { 00:15:22.088 "trtype": "TCP", 00:15:22.088 "adrfam": "IPv4", 00:15:22.088 "traddr": "10.0.0.3", 00:15:22.088 "trsvcid": "4420" 00:15:22.088 }, 00:15:22.088 "peer_address": { 00:15:22.088 "trtype": "TCP", 00:15:22.088 "adrfam": "IPv4", 00:15:22.088 "traddr": "10.0.0.1", 00:15:22.088 "trsvcid": "38300" 00:15:22.088 }, 00:15:22.088 "auth": { 00:15:22.088 "state": "completed", 00:15:22.088 "digest": "sha512", 00:15:22.088 "dhgroup": "ffdhe3072" 00:15:22.088 } 00:15:22.088 } 00:15:22.088 ]' 00:15:22.088 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.363 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:22.363 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.363 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:22.363 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.363 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.363 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.363 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.622 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:22.622 01:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 
-q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:23.558 01:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:23.817 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:24.076 00:15:24.076 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.076 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.076 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.642 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.642 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.642 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.642 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.643 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.643 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.643 { 00:15:24.643 "cntlid": 121, 00:15:24.643 "qid": 0, 00:15:24.643 "state": "enabled", 00:15:24.643 "thread": "nvmf_tgt_poll_group_000", 00:15:24.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:24.643 "listen_address": { 00:15:24.643 "trtype": "TCP", 00:15:24.643 "adrfam": "IPv4", 00:15:24.643 "traddr": "10.0.0.3", 00:15:24.643 "trsvcid": "4420" 00:15:24.643 }, 00:15:24.643 "peer_address": { 00:15:24.643 "trtype": "TCP", 00:15:24.643 "adrfam": "IPv4", 00:15:24.643 "traddr": "10.0.0.1", 00:15:24.643 "trsvcid": "38338" 00:15:24.643 }, 00:15:24.643 "auth": { 00:15:24.643 "state": "completed", 00:15:24.643 "digest": "sha512", 00:15:24.643 "dhgroup": "ffdhe4096" 00:15:24.643 } 00:15:24.643 } 00:15:24.643 ]' 00:15:24.643 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.643 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:24.643 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.643 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:24.643 01:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.643 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.643 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.643 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.902 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret 
DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:24.902 01:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:25.837 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.837 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:25.837 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.837 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.837 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.837 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.837 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:25.837 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.097 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:26.664 00:15:26.664 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:26.664 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.664 01:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.923 { 00:15:26.923 "cntlid": 123, 00:15:26.923 "qid": 0, 00:15:26.923 "state": "enabled", 00:15:26.923 "thread": "nvmf_tgt_poll_group_000", 00:15:26.923 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:26.923 "listen_address": { 00:15:26.923 "trtype": "TCP", 00:15:26.923 "adrfam": "IPv4", 00:15:26.923 "traddr": "10.0.0.3", 00:15:26.923 "trsvcid": "4420" 00:15:26.923 }, 00:15:26.923 "peer_address": { 00:15:26.923 "trtype": "TCP", 00:15:26.923 "adrfam": "IPv4", 00:15:26.923 "traddr": "10.0.0.1", 00:15:26.923 "trsvcid": "54968" 00:15:26.923 }, 00:15:26.923 "auth": { 00:15:26.923 "state": "completed", 00:15:26.923 "digest": "sha512", 00:15:26.923 "dhgroup": "ffdhe4096" 00:15:26.923 } 00:15:26.923 } 00:15:26.923 ]' 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:26.923 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.181 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.181 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:27.181 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.439 01:35:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:27.439 01:35:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:28.006 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.006 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:28.006 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.006 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.006 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.006 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.006 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.006 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:28.573 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.574 01:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.574 01:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:28.833 00:15:28.833 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.833 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.833 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.092 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.092 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.092 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.092 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.092 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.092 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.092 { 00:15:29.092 "cntlid": 125, 00:15:29.092 "qid": 0, 00:15:29.092 "state": "enabled", 00:15:29.092 "thread": "nvmf_tgt_poll_group_000", 00:15:29.092 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:29.092 "listen_address": { 00:15:29.092 "trtype": "TCP", 00:15:29.092 "adrfam": "IPv4", 00:15:29.092 "traddr": "10.0.0.3", 00:15:29.092 "trsvcid": "4420" 00:15:29.092 }, 00:15:29.092 "peer_address": { 00:15:29.092 "trtype": "TCP", 00:15:29.092 "adrfam": "IPv4", 00:15:29.092 "traddr": "10.0.0.1", 00:15:29.092 "trsvcid": "54998" 00:15:29.092 }, 00:15:29.092 "auth": { 00:15:29.092 "state": "completed", 00:15:29.092 "digest": "sha512", 00:15:29.092 "dhgroup": "ffdhe4096" 00:15:29.092 } 00:15:29.092 } 00:15:29.092 ]' 00:15:29.092 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.092 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:29.351 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.351 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.351 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.351 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.351 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.351 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.610 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:29.610 01:35:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:30.177 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.177 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.177 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:30.177 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.177 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.177 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.177 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.177 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.177 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key3 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:30.745 01:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.002 00:15:31.002 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:31.002 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:31.002 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:31.260 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:31.260 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:31.260 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.260 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.260 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.260 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:31.260 { 00:15:31.260 "cntlid": 127, 00:15:31.260 "qid": 0, 00:15:31.260 "state": "enabled", 00:15:31.260 "thread": "nvmf_tgt_poll_group_000", 00:15:31.260 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:31.260 "listen_address": { 00:15:31.260 "trtype": "TCP", 00:15:31.260 "adrfam": "IPv4", 00:15:31.260 "traddr": "10.0.0.3", 00:15:31.260 "trsvcid": "4420" 00:15:31.260 }, 00:15:31.260 "peer_address": { 00:15:31.260 "trtype": "TCP", 00:15:31.260 "adrfam": "IPv4", 00:15:31.260 "traddr": "10.0.0.1", 00:15:31.260 "trsvcid": "55022" 00:15:31.260 }, 00:15:31.260 "auth": { 00:15:31.260 "state": "completed", 00:15:31.260 "digest": "sha512", 00:15:31.260 "dhgroup": "ffdhe4096" 00:15:31.260 } 00:15:31.260 } 00:15:31.260 ]' 00:15:31.260 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:31.519 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:31.519 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:31.519 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:31.519 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:31.519 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:31.519 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:31.519 01:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.779 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:31.779 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:32.716 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:32.716 01:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.975 01:35:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.975 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.543 00:15:33.543 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.543 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.543 01:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.802 { 00:15:33.802 "cntlid": 129, 00:15:33.802 "qid": 0, 00:15:33.802 "state": "enabled", 00:15:33.802 "thread": "nvmf_tgt_poll_group_000", 00:15:33.802 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:33.802 "listen_address": { 00:15:33.802 "trtype": "TCP", 00:15:33.802 "adrfam": "IPv4", 00:15:33.802 "traddr": "10.0.0.3", 00:15:33.802 "trsvcid": "4420" 00:15:33.802 }, 00:15:33.802 "peer_address": { 00:15:33.802 "trtype": "TCP", 00:15:33.802 "adrfam": "IPv4", 00:15:33.802 "traddr": "10.0.0.1", 00:15:33.802 "trsvcid": "55056" 00:15:33.802 }, 00:15:33.802 "auth": { 00:15:33.802 "state": "completed", 00:15:33.802 "digest": "sha512", 00:15:33.802 "dhgroup": "ffdhe6144" 00:15:33.802 } 00:15:33.802 } 00:15:33.802 ]' 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:33.802 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:33.803 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:33.803 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:33.803 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:33.803 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:33.803 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.370 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:34.370 01:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:34.937 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.937 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:34.937 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.937 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.937 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.937 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.937 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:34.937 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.196 01:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.196 01:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.778 00:15:35.778 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.778 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.778 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:36.037 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:36.037 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:36.037 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.037 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.037 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.037 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:36.037 { 00:15:36.037 "cntlid": 131, 00:15:36.037 "qid": 0, 00:15:36.037 "state": "enabled", 00:15:36.037 "thread": "nvmf_tgt_poll_group_000", 00:15:36.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:36.037 "listen_address": { 00:15:36.037 "trtype": "TCP", 00:15:36.038 "adrfam": "IPv4", 00:15:36.038 "traddr": "10.0.0.3", 00:15:36.038 "trsvcid": "4420" 00:15:36.038 }, 00:15:36.038 "peer_address": { 00:15:36.038 "trtype": "TCP", 00:15:36.038 "adrfam": "IPv4", 00:15:36.038 "traddr": "10.0.0.1", 00:15:36.038 "trsvcid": "40214" 00:15:36.038 }, 00:15:36.038 "auth": { 00:15:36.038 "state": "completed", 00:15:36.038 "digest": "sha512", 00:15:36.038 "dhgroup": "ffdhe6144" 00:15:36.038 } 00:15:36.038 } 00:15:36.038 ]' 00:15:36.038 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:36.038 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:36.038 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:36.296 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:36.296 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq 
-r '.[0].auth.state' 00:15:36.297 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:36.297 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:36.297 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:36.556 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:36.556 01:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:37.490 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.490 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.490 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:37.490 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.490 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.490 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.490 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.490 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:37.490 01:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.749 01:35:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:37.749 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:38.317 00:15:38.317 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.317 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.317 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.576 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.576 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.576 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.576 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.576 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.576 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.576 { 00:15:38.576 "cntlid": 133, 00:15:38.576 "qid": 0, 00:15:38.576 "state": "enabled", 00:15:38.576 "thread": "nvmf_tgt_poll_group_000", 00:15:38.576 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:38.576 "listen_address": { 00:15:38.576 "trtype": "TCP", 00:15:38.576 "adrfam": "IPv4", 00:15:38.576 "traddr": "10.0.0.3", 00:15:38.576 "trsvcid": "4420" 00:15:38.576 }, 00:15:38.576 "peer_address": { 00:15:38.576 "trtype": "TCP", 00:15:38.576 "adrfam": "IPv4", 00:15:38.576 "traddr": "10.0.0.1", 00:15:38.576 "trsvcid": "40254" 00:15:38.576 }, 00:15:38.576 "auth": { 00:15:38.576 "state": "completed", 00:15:38.576 "digest": "sha512", 00:15:38.576 "dhgroup": "ffdhe6144" 00:15:38.576 } 00:15:38.576 } 00:15:38.576 ]' 00:15:38.576 01:35:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.836 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:38.836 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.836 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 
== \f\f\d\h\e\6\1\4\4 ]] 00:15:38.836 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.836 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.836 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.836 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:39.095 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:39.095 01:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:39.751 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.751 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:39.751 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.751 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.751 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.751 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.751 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:39.751 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.010 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:40.577 00:15:40.577 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.577 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.577 01:35:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.837 { 00:15:40.837 "cntlid": 135, 00:15:40.837 "qid": 0, 00:15:40.837 "state": "enabled", 00:15:40.837 "thread": "nvmf_tgt_poll_group_000", 00:15:40.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:40.837 "listen_address": { 00:15:40.837 "trtype": "TCP", 00:15:40.837 "adrfam": "IPv4", 00:15:40.837 "traddr": "10.0.0.3", 00:15:40.837 "trsvcid": "4420" 00:15:40.837 }, 00:15:40.837 "peer_address": { 00:15:40.837 "trtype": "TCP", 00:15:40.837 "adrfam": "IPv4", 00:15:40.837 "traddr": "10.0.0.1", 00:15:40.837 "trsvcid": "40278" 00:15:40.837 }, 00:15:40.837 "auth": { 00:15:40.837 "state": "completed", 00:15:40.837 "digest": "sha512", 00:15:40.837 "dhgroup": "ffdhe6144" 00:15:40.837 } 00:15:40.837 } 00:15:40.837 ]' 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.837 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:41.404 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:41.404 01:35:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.971 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:41.971 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.231 01:35:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:42.798 00:15:42.798 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:42.798 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:42.798 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.057 { 00:15:43.057 "cntlid": 137, 00:15:43.057 "qid": 0, 00:15:43.057 "state": "enabled", 00:15:43.057 "thread": "nvmf_tgt_poll_group_000", 00:15:43.057 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:43.057 "listen_address": { 00:15:43.057 "trtype": "TCP", 00:15:43.057 "adrfam": "IPv4", 00:15:43.057 "traddr": "10.0.0.3", 00:15:43.057 "trsvcid": "4420" 00:15:43.057 }, 00:15:43.057 "peer_address": { 00:15:43.057 "trtype": "TCP", 00:15:43.057 "adrfam": "IPv4", 00:15:43.057 "traddr": "10.0.0.1", 00:15:43.057 "trsvcid": "40308" 00:15:43.057 }, 00:15:43.057 "auth": { 00:15:43.057 "state": "completed", 00:15:43.057 "digest": "sha512", 00:15:43.057 "dhgroup": "ffdhe8192" 00:15:43.057 } 00:15:43.057 } 00:15:43.057 ]' 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:43.057 01:35:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.057 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:43.316 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:43.316 01:35:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:43.882 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.882 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.882 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:43.882 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.882 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.882 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.882 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.882 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:43.883 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:44.450 01:35:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.450 01:35:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.019 00:15:45.019 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.019 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:45.019 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.278 { 00:15:45.278 "cntlid": 139, 00:15:45.278 "qid": 0, 00:15:45.278 "state": "enabled", 00:15:45.278 "thread": "nvmf_tgt_poll_group_000", 00:15:45.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:45.278 "listen_address": { 00:15:45.278 "trtype": "TCP", 00:15:45.278 "adrfam": "IPv4", 00:15:45.278 "traddr": "10.0.0.3", 00:15:45.278 "trsvcid": "4420" 00:15:45.278 }, 00:15:45.278 "peer_address": { 00:15:45.278 "trtype": "TCP", 00:15:45.278 "adrfam": "IPv4", 00:15:45.278 "traddr": "10.0.0.1", 00:15:45.278 "trsvcid": "40344" 00:15:45.278 }, 00:15:45.278 "auth": { 00:15:45.278 "state": "completed", 00:15:45.278 "digest": "sha512", 00:15:45.278 "dhgroup": "ffdhe8192" 00:15:45.278 } 00:15:45.278 } 00:15:45.278 ]' 00:15:45.278 01:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.278 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.537 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:45.537 01:35:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: --dhchap-ctrl-secret DHHC-1:02:YmNiYjViODdiMWJkOGVmZmRlYzU5NWFhYzdmMjlmODFmYzk2NjQ5NGZiMWYzZTMyLfawUw==: 00:15:46.472 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.472 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:46.472 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.472 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.472 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.472 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.472 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.472 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # 
dhgroup=ffdhe8192 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.731 01:35:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.298 00:15:47.298 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.298 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.298 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.557 { 00:15:47.557 "cntlid": 141, 00:15:47.557 "qid": 0, 00:15:47.557 "state": "enabled", 00:15:47.557 "thread": "nvmf_tgt_poll_group_000", 00:15:47.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:47.557 "listen_address": { 00:15:47.557 "trtype": "TCP", 00:15:47.557 "adrfam": "IPv4", 00:15:47.557 "traddr": "10.0.0.3", 00:15:47.557 "trsvcid": "4420" 00:15:47.557 }, 00:15:47.557 "peer_address": { 00:15:47.557 "trtype": "TCP", 00:15:47.557 "adrfam": "IPv4", 00:15:47.557 "traddr": "10.0.0.1", 00:15:47.557 "trsvcid": "51690" 00:15:47.557 }, 00:15:47.557 "auth": { 00:15:47.557 "state": "completed", 00:15:47.557 "digest": 
"sha512", 00:15:47.557 "dhgroup": "ffdhe8192" 00:15:47.557 } 00:15:47.557 } 00:15:47.557 ]' 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:47.557 01:35:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:47.815 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:47.815 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:47.815 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.074 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:48.074 01:35:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:01:Mzc1ZjBjYmRiMjE3MDdhYzZhODA2MTU3NmJlNWVjM2QWtGgy: 00:15:48.641 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.641 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:48.641 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.641 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.641 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.641 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.641 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.641 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:15:48.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:15:48.899 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # digest=sha512 00:15:48.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:48.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:15:48.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.900 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.158 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.158 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:49.158 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.158 01:35:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.725 00:15:49.725 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.725 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.725 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.984 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.984 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.984 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.984 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.984 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.984 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.984 { 00:15:49.984 "cntlid": 143, 00:15:49.984 "qid": 0, 00:15:49.984 "state": "enabled", 00:15:49.984 "thread": "nvmf_tgt_poll_group_000", 00:15:49.984 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:49.984 "listen_address": { 00:15:49.984 "trtype": "TCP", 00:15:49.984 "adrfam": "IPv4", 00:15:49.984 "traddr": "10.0.0.3", 00:15:49.984 "trsvcid": "4420" 00:15:49.984 }, 00:15:49.984 "peer_address": { 00:15:49.984 "trtype": "TCP", 00:15:49.984 "adrfam": "IPv4", 00:15:49.984 "traddr": "10.0.0.1", 00:15:49.984 "trsvcid": "51720" 00:15:49.984 }, 00:15:49.984 "auth": { 00:15:49.984 "state": "completed", 00:15:49.984 
"digest": "sha512", 00:15:49.984 "dhgroup": "ffdhe8192" 00:15:49.984 } 00:15:49.984 } 00:15:49.984 ]' 00:15:49.984 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.243 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:50.243 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.243 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.243 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.243 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.243 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.243 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.809 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:50.809 01:35:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.374 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.374 01:35:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups 
null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:51.632 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.568 00:15:52.568 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.568 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.568 01:36:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.827 { 00:15:52.827 "cntlid": 145, 00:15:52.827 "qid": 0, 00:15:52.827 "state": "enabled", 00:15:52.827 "thread": "nvmf_tgt_poll_group_000", 00:15:52.827 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:52.827 "listen_address": { 00:15:52.827 "trtype": "TCP", 00:15:52.827 "adrfam": "IPv4", 00:15:52.827 "traddr": "10.0.0.3", 00:15:52.827 "trsvcid": "4420" 00:15:52.827 }, 00:15:52.827 "peer_address": { 00:15:52.827 "trtype": "TCP", 00:15:52.827 "adrfam": "IPv4", 00:15:52.827 "traddr": "10.0.0.1", 00:15:52.827 "trsvcid": "51742" 00:15:52.827 }, 00:15:52.827 "auth": { 00:15:52.827 "state": "completed", 00:15:52.827 "digest": "sha512", 00:15:52.827 "dhgroup": "ffdhe8192" 00:15:52.827 } 00:15:52.827 } 00:15:52.827 ]' 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.827 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.394 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:53.394 01:36:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:00:ZmIwNWE3YzA4YzJiMGY5YTViNjNiMTcxYzVlMTg5ZDM5ODc1YjkyNGEzMWYyZjhkKjsX8Q==: --dhchap-ctrl-secret DHHC-1:03:ODljYjE0NWM4Y2JmZDZjMTQ0N2YzMmI3ZWQ3Zjg3YTFmOWRmZTRjMThmNzJmNmUxZWUzMTlmM2ExMGE0MWE0YRh3EJg=: 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.962 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 00:15:53.962 01:36:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:53.962 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:15:54.530 request: 00:15:54.530 { 00:15:54.530 "name": "nvme0", 00:15:54.530 "trtype": "tcp", 00:15:54.530 "traddr": "10.0.0.3", 00:15:54.530 "adrfam": "ipv4", 00:15:54.530 "trsvcid": "4420", 00:15:54.530 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:54.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:54.530 "prchk_reftag": false, 00:15:54.530 "prchk_guard": false, 00:15:54.530 "hdgst": false, 00:15:54.530 "ddgst": false, 00:15:54.530 "dhchap_key": "key2", 00:15:54.530 "allow_unrecognized_csi": false, 00:15:54.530 "method": "bdev_nvme_attach_controller", 00:15:54.530 "req_id": 1 00:15:54.530 } 00:15:54.530 Got JSON-RPC error response 00:15:54.530 response: 00:15:54.530 { 00:15:54.530 "code": -5, 00:15:54.530 "message": "Input/output error" 00:15:54.530 } 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:54.530 
01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:54.530 01:36:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:15:55.098 request: 00:15:55.098 { 00:15:55.098 "name": "nvme0", 00:15:55.098 "trtype": "tcp", 00:15:55.098 "traddr": "10.0.0.3", 00:15:55.098 "adrfam": "ipv4", 00:15:55.098 "trsvcid": "4420", 00:15:55.098 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:55.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:55.098 "prchk_reftag": false, 00:15:55.098 "prchk_guard": false, 00:15:55.098 "hdgst": false, 00:15:55.098 "ddgst": false, 00:15:55.098 "dhchap_key": "key1", 00:15:55.098 "dhchap_ctrlr_key": "ckey2", 00:15:55.098 "allow_unrecognized_csi": false, 00:15:55.098 "method": "bdev_nvme_attach_controller", 00:15:55.098 "req_id": 1 00:15:55.098 } 00:15:55.098 Got JSON-RPC error response 00:15:55.098 response: 00:15:55.098 { 
00:15:55.098 "code": -5, 00:15:55.098 "message": "Input/output error" 00:15:55.098 } 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.098 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:15:55.358 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:55.358 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.358 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.358 01:36:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.925 
request: 00:15:55.925 { 00:15:55.925 "name": "nvme0", 00:15:55.925 "trtype": "tcp", 00:15:55.925 "traddr": "10.0.0.3", 00:15:55.925 "adrfam": "ipv4", 00:15:55.925 "trsvcid": "4420", 00:15:55.925 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:15:55.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:15:55.925 "prchk_reftag": false, 00:15:55.925 "prchk_guard": false, 00:15:55.925 "hdgst": false, 00:15:55.925 "ddgst": false, 00:15:55.925 "dhchap_key": "key1", 00:15:55.925 "dhchap_ctrlr_key": "ckey1", 00:15:55.925 "allow_unrecognized_csi": false, 00:15:55.925 "method": "bdev_nvme_attach_controller", 00:15:55.925 "req_id": 1 00:15:55.925 } 00:15:55.925 Got JSON-RPC error response 00:15:55.925 response: 00:15:55.925 { 00:15:55.925 "code": -5, 00:15:55.925 "message": "Input/output error" 00:15:55.925 } 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 69746 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69746 ']' 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69746 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69746 00:15:55.925 killing process with pid 69746 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69746' 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69746 00:15:55.925 01:36:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69746 00:15:56.862 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:15:56.862 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:56.862 01:36:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:56.862 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.120 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=72840 00:15:57.120 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 72840 00:15:57.120 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 72840 ']' 00:15:57.120 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:15:57.121 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.121 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.121 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.121 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.121 01:36:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:58.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 72840 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 72840 ']' 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:58.057 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.317 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:58.317 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:58.317 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:15:58.317 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.317 01:36:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.576 null0 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.cex 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.bVq ]] 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.bVq 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.EON 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.fcs ]] 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.fcs 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.835 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:58.835 01:36:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.UDa 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.d7T ]] 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.d7T 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.LKh 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
00:15:58.836 01:36:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:59.772 nvme0n1 00:15:59.772 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:59.772 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:59.772 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.031 { 00:16:00.031 "cntlid": 1, 00:16:00.031 "qid": 0, 00:16:00.031 "state": "enabled", 00:16:00.031 "thread": "nvmf_tgt_poll_group_000", 00:16:00.031 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:16:00.031 "listen_address": { 00:16:00.031 "trtype": "TCP", 00:16:00.031 "adrfam": "IPv4", 00:16:00.031 "traddr": "10.0.0.3", 00:16:00.031 "trsvcid": "4420" 00:16:00.031 }, 00:16:00.031 "peer_address": { 00:16:00.031 "trtype": "TCP", 00:16:00.031 "adrfam": "IPv4", 00:16:00.031 "traddr": "10.0.0.1", 00:16:00.031 "trsvcid": "39004" 00:16:00.031 }, 00:16:00.031 "auth": { 00:16:00.031 "state": "completed", 00:16:00.031 "digest": "sha512", 00:16:00.031 "dhgroup": "ffdhe8192" 00:16:00.031 } 00:16:00.031 } 00:16:00.031 ]' 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:00.031 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.290 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:00.290 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.290 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.290 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.290 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.551 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:16:00.551 01:36:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:16:01.153 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.153 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.153 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:16:01.153 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.153 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.153 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.153 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key3 00:16:01.153 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.153 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.422 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.422 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:01.422 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.681 01:36:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.944 request: 00:16:01.944 { 00:16:01.944 "name": "nvme0", 00:16:01.944 "trtype": "tcp", 00:16:01.944 "traddr": "10.0.0.3", 00:16:01.944 "adrfam": "ipv4", 00:16:01.944 "trsvcid": "4420", 00:16:01.944 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:01.944 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:16:01.944 "prchk_reftag": false, 00:16:01.944 "prchk_guard": false, 00:16:01.944 "hdgst": false, 00:16:01.944 "ddgst": false, 00:16:01.944 "dhchap_key": "key3", 00:16:01.944 "allow_unrecognized_csi": false, 00:16:01.944 "method": "bdev_nvme_attach_controller", 00:16:01.944 "req_id": 1 00:16:01.944 } 00:16:01.944 Got JSON-RPC error response 00:16:01.944 response: 00:16:01.944 { 00:16:01.944 "code": -5, 00:16:01.944 "message": "Input/output error" 00:16:01.944 } 00:16:01.944 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:01.944 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:01.944 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:01.944 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:01.944 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:01.944 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:01.944 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:01.944 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 
10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.205 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:02.464 request: 00:16:02.464 { 00:16:02.464 "name": "nvme0", 00:16:02.464 "trtype": "tcp", 00:16:02.464 "traddr": "10.0.0.3", 00:16:02.464 "adrfam": "ipv4", 00:16:02.464 "trsvcid": "4420", 00:16:02.464 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:02.464 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:16:02.464 "prchk_reftag": false, 00:16:02.464 "prchk_guard": false, 00:16:02.464 "hdgst": false, 00:16:02.464 "ddgst": false, 00:16:02.464 "dhchap_key": "key3", 00:16:02.464 "allow_unrecognized_csi": false, 00:16:02.464 "method": "bdev_nvme_attach_controller", 00:16:02.464 "req_id": 1 00:16:02.464 } 00:16:02.464 Got JSON-RPC error response 00:16:02.464 response: 00:16:02.464 { 00:16:02.464 "code": -5, 00:16:02.464 "message": "Input/output error" 00:16:02.464 } 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:02.464 01:36:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:02.724 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:03.291 request: 00:16:03.291 { 00:16:03.291 "name": "nvme0", 00:16:03.291 "trtype": "tcp", 00:16:03.291 "traddr": "10.0.0.3", 00:16:03.291 "adrfam": "ipv4", 00:16:03.291 "trsvcid": "4420", 00:16:03.291 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:03.291 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:16:03.291 "prchk_reftag": false, 00:16:03.291 "prchk_guard": false, 00:16:03.291 "hdgst": false, 00:16:03.291 "ddgst": false, 00:16:03.291 "dhchap_key": "key0", 00:16:03.291 "dhchap_ctrlr_key": "key1", 00:16:03.292 "allow_unrecognized_csi": false, 00:16:03.292 "method": "bdev_nvme_attach_controller", 00:16:03.292 "req_id": 1 00:16:03.292 } 00:16:03.292 Got JSON-RPC error response 00:16:03.292 response: 00:16:03.292 { 00:16:03.292 "code": -5, 00:16:03.292 "message": "Input/output error" 00:16:03.292 } 00:16:03.292 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:03.292 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.292 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.292 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:16:03.292 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:16:03.292 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:03.292 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:03.550 nvme0n1 00:16:03.550 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:03.550 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.550 01:36:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:03.809 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.809 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.809 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.067 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 00:16:04.068 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.068 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.068 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.068 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:04.068 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:04.068 01:36:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:05.004 nvme0n1 00:16:05.004 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:05.004 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:05.004 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.263 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.263 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:05.263 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.263 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.263 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.263 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:05.263 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:05.263 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.522 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.522 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:16:05.522 01:36:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.3 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid 5af99618-86f8-46bf-8130-da23f42c5a81 -l 0 --dhchap-secret DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: --dhchap-ctrl-secret DHHC-1:03:N2Y3MGQyMDA4ZGYxYmVkMGI2OTNkYjNkMzRhNmUyOTBjYTk0YWIxN2I1ZjE2YzAxZjM1OTgyZGMwYzliYjliZBI/Qw4=: 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:06.458 01:36:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:07.025 request: 00:16:07.025 { 00:16:07.025 "name": "nvme0", 00:16:07.025 "trtype": "tcp", 00:16:07.025 "traddr": "10.0.0.3", 00:16:07.025 "adrfam": "ipv4", 00:16:07.025 "trsvcid": "4420", 00:16:07.025 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:07.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81", 00:16:07.025 "prchk_reftag": false, 00:16:07.025 "prchk_guard": false, 00:16:07.025 "hdgst": false, 00:16:07.025 "ddgst": false, 00:16:07.025 "dhchap_key": "key1", 00:16:07.025 "allow_unrecognized_csi": false, 00:16:07.025 "method": "bdev_nvme_attach_controller", 00:16:07.025 "req_id": 1 00:16:07.025 } 00:16:07.025 Got JSON-RPC error response 00:16:07.025 response: 00:16:07.025 { 00:16:07.025 "code": -5, 00:16:07.025 "message": "Input/output error" 00:16:07.025 } 00:16:07.025 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:07.025 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.025 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.025 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.025 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:07.026 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:07.026 01:36:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:07.962 nvme0n1 00:16:07.962 
01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:07.962 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.962 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:08.531 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.531 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.531 01:36:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.790 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:16:08.790 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.790 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.790 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.790 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:08.790 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:08.790 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:09.049 nvme0n1 00:16:09.049 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:09.049 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:09.049 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:09.308 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:09.308 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:09.308 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.567 01:36:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: '' 2s 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: ]] 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:OTRjMmEyOGRlOWMyZTM2M2ZmOThkZjdlZGY2ZjYyYTj4MDrd: 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:09.567 01:36:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key1 --dhchap-ctrlr-key key2 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: 2s 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:12.098 01:36:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:16:12.098 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: ]] 00:16:12.099 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:YTc4MDBhMjhhYTBlY2EyNDRmYmIxNjgyNzA3OGQwZGJiNmNmNGI1YTkzYmYzNDVlPb9ppw==: 00:16:12.099 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:12.099 01:36:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:16:14.002 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:16:14.002 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:16:14.002 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:16:14.002 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:16:14.003 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:16:14.003 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:16:14.003 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:16:14.003 01:36:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.003 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:14.003 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.003 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.003 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.003 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:14.003 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:14.003 01:36:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:14.939 nvme0n1 00:16:14.939 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:14.939 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.939 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.939 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.939 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:14.939 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:15.506 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:16:15.506 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.506 01:36:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:16:15.765 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.765 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:16:15.765 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.765 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.765 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.765 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:16:15.765 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:16:16.024 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:16:16.024 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.024 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:16:16.282 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:16.283 01:36:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:16.283 01:36:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:17.225 request: 00:16:17.225 { 00:16:17.225 "name": "nvme0", 00:16:17.225 "dhchap_key": "key1", 00:16:17.225 "dhchap_ctrlr_key": "key3", 00:16:17.225 "method": "bdev_nvme_set_keys", 00:16:17.225 "req_id": 1 00:16:17.225 } 00:16:17.225 Got JSON-RPC error response 00:16:17.225 response: 00:16:17.225 { 00:16:17.225 "code": -13, 00:16:17.225 "message": "Permission denied" 00:16:17.225 } 00:16:17.225 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:17.225 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:17.225 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:17.225 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:17.225 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:17.225 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.225 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:17.498 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:16:17.498 01:36:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:16:18.434 01:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:16:18.434 01:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:16:18.434 01:36:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.692 01:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:16:18.692 01:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:18.692 01:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.692 01:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.692 01:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.692 01:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:18.692 01:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:18.692 01:36:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.3 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:16:19.627 nvme0n1 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys 
nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:19.627 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:16:20.562 request: 00:16:20.562 { 00:16:20.562 "name": "nvme0", 00:16:20.562 "dhchap_key": "key2", 00:16:20.562 "dhchap_ctrlr_key": "key0", 00:16:20.562 "method": "bdev_nvme_set_keys", 00:16:20.562 "req_id": 1 00:16:20.562 } 00:16:20.562 Got JSON-RPC error response 00:16:20.562 response: 00:16:20.562 { 00:16:20.562 "code": -13, 00:16:20.562 "message": "Permission denied" 00:16:20.562 } 00:16:20.562 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:20.562 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:20.562 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:20.562 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:20.562 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:20.562 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:20.562 01:36:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.821 01:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:16:20.821 01:36:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:16:21.756 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:16:21.756 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:16:21.756 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 69784 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 69784 ']' 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 69784 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69784 00:16:22.014 killing process with pid 69784 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:22.014 01:36:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69784' 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 69784 00:16:22.014 01:36:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 69784 00:16:23.916 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:16:23.916 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:23.916 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:16:23.916 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:23.916 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:16:23.916 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:23.916 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:23.916 rmmod nvme_tcp 00:16:23.916 rmmod nvme_fabrics 00:16:24.176 rmmod nvme_keyring 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 72840 ']' 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 72840 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 72840 ']' 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 72840 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72840 00:16:24.176 killing process with pid 72840 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72840' 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 72840 00:16:24.176 01:36:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 72840 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 
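The key-rotation sequence that target/auth.sh exercises above reduces to re-keying the target-side subsystem first and the already-attached host controller second; a host-side bdev_nvme_set_keys that names a key the subsystem was never given fails with JSON-RPC error -13 (Permission denied), as in the two error responses above. A minimal sketch of that order, reusing the sockets, NQNs and pre-registered key names (key0-key3) from this run; the TGT_RPC/HOST_RPC variables are illustrative only:

    TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                          # target app, default /var/tmp/spdk.sock
    HOST_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/host.sock"   # host-side bdev_nvme app
    SUBNQN=nqn.2024-03.io.spdk:cnode0
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81

    # 1) install the new DH-HMAC-CHAP pair on the target subsystem first
    $TGT_RPC nvmf_subsystem_set_keys "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key key3
    # 2) then re-key the attached host controller so both ends agree
    $HOST_RPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
    # a mismatched host-side call (e.g. --dhchap-key key1 --dhchap-ctrlr-key key3, or key2/key0 above)
    # is rejected by the target with code -13, Permission denied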
00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.113 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@300 -- # return 0 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.cex /tmp/spdk.key-sha256.EON /tmp/spdk.key-sha384.UDa /tmp/spdk.key-sha512.LKh /tmp/spdk.key-sha512.bVq /tmp/spdk.key-sha384.fcs /tmp/spdk.key-sha256.d7T '' /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log /home/vagrant/spdk_repo/spdk/../output/nvmf-auth.log 00:16:25.373 00:16:25.373 real 3m18.212s 00:16:25.373 user 7m52.978s 00:16:25.373 sys 0m28.242s 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.373 ************************************ 00:16:25.373 END TEST nvmf_auth_target 00:16:25.373 ************************************ 00:16:25.373 01:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:25.373 ************************************ 00:16:25.373 START TEST nvmf_bdevio_no_huge 00:16:25.373 ************************************ 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:16:25.373 * Looking for test storage... 00:16:25.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:16:25.373 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.633 --rc genhtml_branch_coverage=1 00:16:25.633 --rc genhtml_function_coverage=1 00:16:25.633 --rc genhtml_legend=1 00:16:25.633 --rc geninfo_all_blocks=1 00:16:25.633 --rc geninfo_unexecuted_blocks=1 00:16:25.633 00:16:25.633 ' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.633 --rc genhtml_branch_coverage=1 00:16:25.633 --rc genhtml_function_coverage=1 00:16:25.633 --rc genhtml_legend=1 00:16:25.633 --rc geninfo_all_blocks=1 00:16:25.633 --rc geninfo_unexecuted_blocks=1 00:16:25.633 00:16:25.633 ' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.633 --rc genhtml_branch_coverage=1 00:16:25.633 --rc genhtml_function_coverage=1 00:16:25.633 --rc genhtml_legend=1 00:16:25.633 --rc geninfo_all_blocks=1 00:16:25.633 --rc geninfo_unexecuted_blocks=1 00:16:25.633 00:16:25.633 ' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:25.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:25.633 --rc genhtml_branch_coverage=1 00:16:25.633 --rc genhtml_function_coverage=1 00:16:25.633 --rc genhtml_legend=1 00:16:25.633 --rc geninfo_all_blocks=1 00:16:25.633 --rc geninfo_unexecuted_blocks=1 00:16:25.633 00:16:25.633 ' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:25.633 
01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:25.633 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:25.633 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:25.634 
01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:25.634 Cannot find device "nvmf_init_br" 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # true 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:25.634 Cannot find device "nvmf_init_br2" 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # true 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:25.634 Cannot find device "nvmf_tgt_br" 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@164 -- # true 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:25.634 Cannot find device "nvmf_tgt_br2" 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@165 -- # true 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:25.634 Cannot find device "nvmf_init_br" 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@166 -- # true 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:25.634 Cannot find device "nvmf_init_br2" 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@167 -- # true 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:25.634 Cannot find device "nvmf_tgt_br" 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@168 -- # true 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:25.634 Cannot find device "nvmf_tgt_br2" 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # true 00:16:25.634 01:36:33 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:25.634 Cannot find device "nvmf_br" 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@170 -- # true 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:25.634 Cannot find device "nvmf_init_if" 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # true 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:25.634 Cannot find device "nvmf_init_if2" 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # true 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete 
nvmf_tgt_if 00:16:25.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@173 -- # true 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:25.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@174 -- # true 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:25.634 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:25.893 01:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:25.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:25.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.109 ms 00:16:25.893 00:16:25.893 --- 10.0.0.3 ping statistics --- 00:16:25.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.893 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:25.893 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:25.893 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.057 ms 00:16:25.893 00:16:25.893 --- 10.0.0.4 ping statistics --- 00:16:25.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.893 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:25.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:25.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:16:25.893 00:16:25.893 --- 10.0.0.1 ping statistics --- 00:16:25.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.893 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:16:25.893 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:25.894 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:25.894 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.046 ms 00:16:25.894 00:16:25.894 --- 10.0.0.2 ping statistics --- 00:16:25.894 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:25.894 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@461 -- # return 0 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=73517 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 73517 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 73517 ']' 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.894 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.894 01:36:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:26.152 [2024-11-17 01:36:34.423061] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
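The "Cannot find device" lines above are the idempotent teardown that runs before nvmf_veth_init rebuilds the test network. Condensed to the first initiator/target pair (the second pair, nvmf_init_if2/nvmf_tgt_if2 on 10.0.0.2 and 10.0.0.4, is configured the same way), the topology the trace above creates is roughly:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if

    ip link add nvmf_br type bridge
    ip link set nvmf_init_br master nvmf_br                        # bridge the two host-side peer ends
    ip link set nvmf_tgt_br master nvmf_br
    for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # accept NVMe/TCP (4420) on the initiator interface
    ping -c 1 10.0.0.3                                                   # initiator -> target reachability check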
00:16:26.152 [2024-11-17 01:36:34.423232] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:16:26.411 [2024-11-17 01:36:34.645006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:26.411 [2024-11-17 01:36:34.816063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:26.411 [2024-11-17 01:36:34.816453] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:26.411 [2024-11-17 01:36:34.816487] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:26.411 [2024-11-17 01:36:34.816507] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:26.411 [2024-11-17 01:36:34.816521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:26.411 [2024-11-17 01:36:34.818322] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:16:26.411 [2024-11-17 01:36:34.818477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:16:26.411 [2024-11-17 01:36:34.818955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:16:26.411 [2024-11-17 01:36:34.818960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:26.669 [2024-11-17 01:36:34.987900] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:26.927 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.927 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:16:26.927 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:26.927 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:26.927 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.185 [2024-11-17 01:36:35.397889] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.185 Malloc0 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.185 01:36:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.185 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:27.186 [2024-11-17 01:36:35.489007] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:16:27.186 { 00:16:27.186 "params": { 00:16:27.186 "name": "Nvme$subsystem", 00:16:27.186 "trtype": "$TEST_TRANSPORT", 00:16:27.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:27.186 "adrfam": "ipv4", 00:16:27.186 "trsvcid": "$NVMF_PORT", 00:16:27.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:27.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:27.186 "hdgst": ${hdgst:-false}, 00:16:27.186 "ddgst": ${ddgst:-false} 00:16:27.186 }, 00:16:27.186 "method": "bdev_nvme_attach_controller" 00:16:27.186 } 00:16:27.186 EOF 00:16:27.186 )") 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:16:27.186 01:36:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:16:27.186 "params": { 00:16:27.186 "name": "Nvme1", 00:16:27.186 "trtype": "tcp", 00:16:27.186 "traddr": "10.0.0.3", 00:16:27.186 "adrfam": "ipv4", 00:16:27.186 "trsvcid": "4420", 00:16:27.186 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:27.186 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:27.186 "hdgst": false, 00:16:27.186 "ddgst": false 00:16:27.186 }, 00:16:27.186 "method": "bdev_nvme_attach_controller" 00:16:27.186 }' 00:16:27.186 [2024-11-17 01:36:35.586725] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:27.186 [2024-11-17 01:36:35.587095] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid73553 ] 00:16:27.445 [2024-11-17 01:36:35.783291] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:27.703 [2024-11-17 01:36:35.906618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:27.703 [2024-11-17 01:36:35.906688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:27.703 [2024-11-17 01:36:35.906693] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:27.703 [2024-11-17 01:36:36.069331] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:27.960 I/O targets: 00:16:27.960 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:16:27.960 00:16:27.960 00:16:27.960 CUnit - A unit testing framework for C - Version 2.1-3 00:16:27.960 http://cunit.sourceforge.net/ 00:16:27.960 00:16:27.960 00:16:27.960 Suite: bdevio tests on: Nvme1n1 00:16:27.960 Test: blockdev write read block ...passed 00:16:27.960 Test: blockdev write zeroes read block ...passed 00:16:27.960 Test: blockdev write zeroes read no split ...passed 00:16:27.960 Test: blockdev write zeroes read split ...passed 00:16:28.218 Test: blockdev write zeroes read split partial ...passed 00:16:28.218 Test: blockdev reset ...[2024-11-17 01:36:36.423335] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:16:28.218 [2024-11-17 01:36:36.423672] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000029c00 (9): Bad file descriptor 00:16:28.218 [2024-11-17 01:36:36.436234] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:16:28.218 passed 00:16:28.218 Test: blockdev write read 8 blocks ...passed 00:16:28.218 Test: blockdev write read size > 128k ...passed 00:16:28.218 Test: blockdev write read invalid size ...passed 00:16:28.218 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:16:28.218 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:16:28.218 Test: blockdev write read max offset ...passed 00:16:28.218 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:16:28.218 Test: blockdev writev readv 8 blocks ...passed 00:16:28.218 Test: blockdev writev readv 30 x 1block ...passed 00:16:28.218 Test: blockdev writev readv block ...passed 00:16:28.218 Test: blockdev writev readv size > 128k ...passed 00:16:28.218 Test: blockdev writev readv size > 128k in two iovs ...passed 00:16:28.218 Test: blockdev comparev and writev ...[2024-11-17 01:36:36.452861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.218 [2024-11-17 01:36:36.452984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.453054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.218 [2024-11-17 01:36:36.453102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.453616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.218 [2024-11-17 01:36:36.453689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.454235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.218 [2024-11-17 01:36:36.454270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.454896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.218 [2024-11-17 01:36:36.454954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.454992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.218 [2024-11-17 01:36:36.455023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.455436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.218 [2024-11-17 01:36:36.455479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.455512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:16:28.218 [2024-11-17 01:36:36.455927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:16:28.218 passed 00:16:28.218 Test: blockdev nvme passthru rw ...passed 00:16:28.218 Test: blockdev nvme passthru vendor specific ...[2024-11-17 01:36:36.457392] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.218 [2024-11-17 01:36:36.457565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.457768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.218 [2024-11-17 01:36:36.458045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:16:28.218 passed 00:16:28.218 Test: blockdev nvme admin passthru ...[2024-11-17 01:36:36.458905] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.218 [2024-11-17 01:36:36.458963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:16:28.218 [2024-11-17 01:36:36.459136] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:16:28.218 [2024-11-17 01:36:36.459178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:16:28.218 passed 00:16:28.218 Test: blockdev copy ...passed 00:16:28.218 00:16:28.218 Run Summary: Type Total Ran Passed Failed Inactive 00:16:28.219 suites 1 1 n/a 0 0 00:16:28.219 tests 23 23 23 0 0 00:16:28.219 asserts 152 152 152 0 n/a 00:16:28.219 00:16:28.219 Elapsed time = 0.242 seconds 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:28.785 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:28.785 rmmod nvme_tcp 00:16:28.785 rmmod nvme_fabrics 00:16:29.044 rmmod nvme_keyring 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 73517 ']' 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 73517 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 73517 ']' 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 73517 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73517 00:16:29.044 killing process with pid 73517 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73517' 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 73517 00:16:29.044 01:36:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 73517 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:16:29.980 01:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@246 -- # remove_spdk_ns 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@300 -- # return 0 00:16:29.980 00:16:29.980 real 0m4.690s 00:16:29.980 user 0m15.806s 00:16:29.980 sys 0m1.473s 00:16:29.980 ************************************ 00:16:29.980 END TEST nvmf_bdevio_no_huge 00:16:29.980 ************************************ 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:29.980 ************************************ 00:16:29.980 START TEST nvmf_tls 00:16:29.980 ************************************ 00:16:29.980 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:16:30.240 * Looking for test storage... 
00:16:30.240 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:30.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.240 --rc genhtml_branch_coverage=1 00:16:30.240 --rc genhtml_function_coverage=1 00:16:30.240 --rc genhtml_legend=1 00:16:30.240 --rc geninfo_all_blocks=1 00:16:30.240 --rc geninfo_unexecuted_blocks=1 00:16:30.240 00:16:30.240 ' 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:30.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.240 --rc genhtml_branch_coverage=1 00:16:30.240 --rc genhtml_function_coverage=1 00:16:30.240 --rc genhtml_legend=1 00:16:30.240 --rc geninfo_all_blocks=1 00:16:30.240 --rc geninfo_unexecuted_blocks=1 00:16:30.240 00:16:30.240 ' 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:30.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.240 --rc genhtml_branch_coverage=1 00:16:30.240 --rc genhtml_function_coverage=1 00:16:30.240 --rc genhtml_legend=1 00:16:30.240 --rc geninfo_all_blocks=1 00:16:30.240 --rc geninfo_unexecuted_blocks=1 00:16:30.240 00:16:30.240 ' 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:30.240 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.240 --rc genhtml_branch_coverage=1 00:16:30.240 --rc genhtml_function_coverage=1 00:16:30.240 --rc genhtml_legend=1 00:16:30.240 --rc geninfo_all_blocks=1 00:16:30.240 --rc geninfo_unexecuted_blocks=1 00:16:30.240 00:16:30.240 ' 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.240 01:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:16:30.240 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:30.241 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:30.241 
01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@460 -- # nvmf_veth_init 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:16:30.241 Cannot find device "nvmf_init_br" 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@162 -- # true 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:16:30.241 Cannot find device "nvmf_init_br2" 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # true 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:16:30.241 Cannot find device "nvmf_tgt_br" 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@164 -- # true 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:16:30.241 Cannot find device "nvmf_tgt_br2" 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@165 -- # true 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:16:30.241 Cannot find device "nvmf_init_br" 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@166 -- # true 00:16:30.241 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:16:30.500 Cannot find device "nvmf_init_br2" 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@167 -- # true 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:16:30.500 Cannot find device "nvmf_tgt_br" 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@168 -- # true 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:16:30.500 Cannot find device "nvmf_tgt_br2" 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # true 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:16:30.500 Cannot find device "nvmf_br" 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@170 -- # true 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:16:30.500 Cannot find device "nvmf_init_if" 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # true 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:16:30.500 Cannot find device "nvmf_init_if2" 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # true 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:16:30.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@173 -- # true 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:16:30.500 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@174 -- # true 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@181 -- # ip link 
add nvmf_init_if2 type veth peer name nvmf_init_br2 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:16:30.500 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:16:30.760 01:36:38 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:16:30.760 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:16:30.760 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.105 ms 00:16:30.760 00:16:30.760 --- 10.0.0.3 ping statistics --- 00:16:30.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.760 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:16:30.760 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:16:30.760 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.051 ms 00:16:30.760 00:16:30.760 --- 10.0.0.4 ping statistics --- 00:16:30.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.760 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:16:30.760 01:36:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:16:30.760 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:30.760 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:16:30.760 00:16:30.760 --- 10.0.0.1 ping statistics --- 00:16:30.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.760 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:16:30.760 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:30.760 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:16:30.760 00:16:30.760 --- 10.0.0.2 ping statistics --- 00:16:30.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:30.760 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@461 -- # return 0 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=73805 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 73805 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 73805 ']' 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.760 01:36:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:30.760 [2024-11-17 01:36:39.163236] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
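For reference, the target started above binds 10.0.0.3:4420 on the veth topology that nvmf_veth_init rebuilt a few lines earlier: two initiator-side interfaces on the host (10.0.0.1 and 10.0.0.2), two target-side interfaces moved into the nvmf_tgt_ns_spdk namespace (10.0.0.3 and 10.0.0.4), all four peer ends enslaved to the nvmf_br bridge, plus iptables ACCEPT rules for port 4420 and the ping sweep above to confirm reachability. The target is also launched with --wait-for-rpc so the ssl socket implementation and TLS version can be configured over RPC before framework_start_init, as the following lines show. A condensed sketch of the network setup, with names and addresses exactly as traced (the "Cannot find device" lines above are only the idempotent teardown of interfaces that did not exist yet):

# Condensed recap of nvmf_veth_init as traced above; the individual 'ip link set ... up'
# commands and the iptables comment tags are omitted here for brevity.
ip netns add nvmf_tgt_ns_spdk

ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip addr add 10.0.0.2/24 dev nvmf_init_if2
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

ip link add nvmf_br type bridge
ip link set nvmf_br up
for peer in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
    ip link set "$peer" master nvmf_br
done

iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT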
00:16:30.760 [2024-11-17 01:36:39.163432] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:31.019 [2024-11-17 01:36:39.350886] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.019 [2024-11-17 01:36:39.463096] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:31.019 [2024-11-17 01:36:39.463396] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:31.019 [2024-11-17 01:36:39.463445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:31.019 [2024-11-17 01:36:39.463476] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:31.019 [2024-11-17 01:36:39.463494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:31.019 [2024-11-17 01:36:39.464982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:31.956 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:31.956 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:31.956 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:31.956 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:31.956 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:31.956 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:31.956 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:16:31.956 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:16:32.215 true 00:16:32.215 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.215 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:16:32.474 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:16:32.474 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:16:32.474 01:36:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:32.733 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:32.733 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:16:32.992 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:16:32.992 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:16:32.992 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:16:33.250 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i 
ssl 00:16:33.250 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:16:33.509 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:16:33.509 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:16:33.509 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:33.509 01:36:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:16:33.799 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:16:33.799 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:16:33.799 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:16:34.057 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.057 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:16:34.317 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:16:34.317 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:16:34.317 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:16:34.576 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:16:34.576 01:36:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:16:34.835 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:16:34.836 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:16:34.836 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:16:34.836 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:16:34.836 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:34.836 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:34.836 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:16:34.836 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:34.836 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.eElvJ8tHlH 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.S2LvEcDQDE 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.eElvJ8tHlH 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.S2LvEcDQDE 00:16:35.095 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:16:35.354 01:36:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:16:35.923 [2024-11-17 01:36:44.076798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:35.923 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.eElvJ8tHlH 00:16:35.923 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.eElvJ8tHlH 00:16:35.923 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:16:36.183 [2024-11-17 01:36:44.450412] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:36.183 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:16:36.442 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:16:36.702 [2024-11-17 01:36:44.938633] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:16:36.702 [2024-11-17 01:36:44.939060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:16:36.702 01:36:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:16:36.961 malloc0 00:16:36.961 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:16:37.220 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.eElvJ8tHlH 00:16:37.479 01:36:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:16:37.738 01:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.eElvJ8tHlH 00:16:49.948 Initializing NVMe Controllers 00:16:49.948 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:16:49.948 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:49.948 Initialization complete. Launching workers. 00:16:49.948 ======================================================== 00:16:49.948 Latency(us) 00:16:49.948 Device Information : IOPS MiB/s Average min max 00:16:49.948 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6947.56 27.14 9215.28 2260.16 11405.18 00:16:49.948 ======================================================== 00:16:49.948 Total : 6947.56 27.14 9215.28 2260.16 11405.18 00:16:49.948 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eElvJ8tHlH 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eElvJ8tHlH 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74051 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74051 /var/tmp/bdevperf.sock 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74051 ']' 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:49.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
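The two NVMeTLSkey-1:01: interchange keys generated above come from the format_key helper in nvmf/common.sh, whose trace ends in a bare "python -" invocation; the python body itself is not captured in this log. The sketch below only reproduces the observable layout (fixed prefix, two-digit hash id, base64 of the key bytes followed by a 4-byte CRC-32, trailing colon); the CRC byte order is an assumption made for illustration, not taken from the real helper.

# Sketch only: mirrors the NVMeTLSkey-1:<hh>:<base64(key || crc32)>: layout seen
# above; the real helper is format_key in nvmf/common.sh.
format_interchange_psk_sketch() {
    local key=$1 digest=$2
    python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])
blob = key + zlib.crc32(key).to_bytes(4, "little")  # CRC byte order assumed
print(f"NVMeTLSkey-1:{digest:02}:{base64.b64encode(blob).decode()}:")
PY
}
# e.g. format_interchange_psk_sketch 00112233445566778899aabbccddeeff 1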
00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:49.948 01:36:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:16:49.948 [2024-11-17 01:36:56.392554] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:16:49.948 [2024-11-17 01:36:56.393196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74051 ] 00:16:49.948 [2024-11-17 01:36:56.573835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:49.948 [2024-11-17 01:36:56.670622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:49.948 [2024-11-17 01:36:56.846386] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:16:49.948 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:49.948 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:16:49.948 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eElvJ8tHlH 00:16:49.948 01:36:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:16:49.948 [2024-11-17 01:36:57.905126] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:16:49.948 TLSTESTn1 00:16:49.948 01:36:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:16:49.948 Running I/O for 10 seconds... 
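Everything from the bdevperf launch at tls.sh@27 down to the perform_tests call at tls.sh@42 is one invocation of the run_bdevperf helper. Condensed from the trace above, the flow is roughly the following; this is a reconstruction from the xtrace output, not the literal source of target/tls.sh (waitforlisten and killprocess are helpers from autotest_common.sh, and trap/cleanup handling is omitted).

# Rough reconstruction of run_bdevperf from the xtrace lines above.
run_bdevperf() {
    local subnqn=$1 hostnqn=$2 psk=$3
    local sock=/var/tmp/bdevperf.sock
    local rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # start bdevperf in wait-for-RPC mode (-z) on its own RPC socket
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &
    local bdevperf_pid=$!
    waitforlisten "$bdevperf_pid" "$sock"
    # register the PSK file and attach to the TLS listener with it
    "$rpc" -s "$sock" keyring_file_add_key key0 "$psk"
    "$rpc" -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 \
        -s 4420 -f ipv4 -n "$subnqn" -q "$hostnqn" --psk key0
    # drive I/O against the attached controller
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
        -t 20 -s "$sock" perform_tests
    killprocess "$bdevperf_pid"
}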
00:16:51.823 2880.00 IOPS, 11.25 MiB/s [2024-11-17T01:37:01.296Z] 2893.50 IOPS, 11.30 MiB/s [2024-11-17T01:37:02.232Z] 2934.00 IOPS, 11.46 MiB/s [2024-11-17T01:37:03.169Z] 2957.00 IOPS, 11.55 MiB/s [2024-11-17T01:37:04.547Z] 2966.20 IOPS, 11.59 MiB/s [2024-11-17T01:37:05.485Z] 2971.50 IOPS, 11.61 MiB/s [2024-11-17T01:37:06.422Z] 2983.00 IOPS, 11.65 MiB/s [2024-11-17T01:37:07.361Z] 2988.12 IOPS, 11.67 MiB/s [2024-11-17T01:37:08.297Z] 2990.11 IOPS, 11.68 MiB/s [2024-11-17T01:37:08.297Z] 2992.50 IOPS, 11.69 MiB/s 00:16:59.838 Latency(us) 00:16:59.838 [2024-11-17T01:37:08.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.838 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:16:59.838 Verification LBA range: start 0x0 length 0x2000 00:16:59.838 TLSTESTn1 : 10.02 2997.73 11.71 0.00 0.00 42613.05 8400.52 38844.97 00:16:59.838 [2024-11-17T01:37:08.297Z] =================================================================================================================== 00:16:59.838 [2024-11-17T01:37:08.297Z] Total : 2997.73 11.71 0.00 0.00 42613.05 8400.52 38844.97 00:16:59.838 { 00:16:59.838 "results": [ 00:16:59.838 { 00:16:59.838 "job": "TLSTESTn1", 00:16:59.838 "core_mask": "0x4", 00:16:59.838 "workload": "verify", 00:16:59.838 "status": "finished", 00:16:59.838 "verify_range": { 00:16:59.838 "start": 0, 00:16:59.838 "length": 8192 00:16:59.838 }, 00:16:59.838 "queue_depth": 128, 00:16:59.838 "io_size": 4096, 00:16:59.838 "runtime": 10.023917, 00:16:59.838 "iops": 2997.7303283736287, 00:16:59.838 "mibps": 11.709884095209487, 00:16:59.838 "io_failed": 0, 00:16:59.838 "io_timeout": 0, 00:16:59.838 "avg_latency_us": 42613.05017755847, 00:16:59.838 "min_latency_us": 8400.523636363636, 00:16:59.838 "max_latency_us": 38844.97454545455 00:16:59.838 } 00:16:59.838 ], 00:16:59.838 "core_count": 1 00:16:59.838 } 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74051 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74051 ']' 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74051 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74051 00:16:59.838 killing process with pid 74051 00:16:59.838 Received shutdown signal, test time was about 10.000000 seconds 00:16:59.838 00:16:59.838 Latency(us) 00:16:59.838 [2024-11-17T01:37:08.297Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:59.838 [2024-11-17T01:37:08.297Z] =================================================================================================================== 00:16:59.838 [2024-11-17T01:37:08.297Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 74051' 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74051 00:16:59.838 01:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74051 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.S2LvEcDQDE 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.S2LvEcDQDE 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.S2LvEcDQDE 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.S2LvEcDQDE 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74196 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74196 /var/tmp/bdevperf.sock 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74196 ']' 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:00.776 01:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:01.035 [2024-11-17 01:37:09.320515] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:01.035 [2024-11-17 01:37:09.320690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74196 ] 00:17:01.294 [2024-11-17 01:37:09.502819] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:01.294 [2024-11-17 01:37:09.601089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:01.553 [2024-11-17 01:37:09.773060] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:01.812 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:01.812 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:01.812 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.S2LvEcDQDE 00:17:02.071 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:02.331 [2024-11-17 01:37:10.715814] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:02.331 [2024-11-17 01:37:10.724272] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:02.331 [2024-11-17 01:37:10.724329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:02.331 [2024-11-17 01:37:10.725293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:02.331 [2024-11-17 01:37:10.726289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:02.331 [2024-11-17 01:37:10.726344] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:02.331 [2024-11-17 01:37:10.726384] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:02.331 [2024-11-17 01:37:10.726405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:17:02.331 request: 00:17:02.331 { 00:17:02.331 "name": "TLSTEST", 00:17:02.331 "trtype": "tcp", 00:17:02.331 "traddr": "10.0.0.3", 00:17:02.331 "adrfam": "ipv4", 00:17:02.331 "trsvcid": "4420", 00:17:02.331 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:02.331 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:02.331 "prchk_reftag": false, 00:17:02.331 "prchk_guard": false, 00:17:02.331 "hdgst": false, 00:17:02.331 "ddgst": false, 00:17:02.331 "psk": "key0", 00:17:02.331 "allow_unrecognized_csi": false, 00:17:02.331 "method": "bdev_nvme_attach_controller", 00:17:02.331 "req_id": 1 00:17:02.331 } 00:17:02.331 Got JSON-RPC error response 00:17:02.331 response: 00:17:02.331 { 00:17:02.331 "code": -5, 00:17:02.331 "message": "Input/output error" 00:17:02.331 } 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74196 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74196 ']' 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74196 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74196 00:17:02.331 killing process with pid 74196 00:17:02.331 Received shutdown signal, test time was about 10.000000 seconds 00:17:02.331 00:17:02.331 Latency(us) 00:17:02.331 [2024-11-17T01:37:10.790Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:02.331 [2024-11-17T01:37:10.790Z] =================================================================================================================== 00:17:02.331 [2024-11-17T01:37:10.790Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74196' 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74196 00:17:02.331 01:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74196 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eElvJ8tHlH 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eElvJ8tHlH 
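The es=0 / valid_exec_arg / '(( !es == 0 ))' lines wrapped around each of these negative cases come from the NOT helper in autotest_common.sh: it runs the wrapped command and succeeds only if that command fails, which is how tls.sh@147/150/153 assert that attaching with the wrong key, host or subsystem is rejected. A minimal sketch of that behaviour, inferred from the trace (the real helper also validates its argument via valid_exec_arg), is:

# Sketch of NOT as suggested by the xtrace output; not the real implementation.
NOT() {
    local es=0
    "$@" || es=$?
    # succeed only when the wrapped command (run_bdevperf here) failed
    (( es != 0 ))
}
# usage seen above:
# NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eElvJ8tHlH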
00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.eElvJ8tHlH 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eElvJ8tHlH 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74237 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74237 /var/tmp/bdevperf.sock 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74237 ']' 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:03.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:03.268 01:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:03.268 [2024-11-17 01:37:11.645848] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:03.268 [2024-11-17 01:37:11.646512] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74237 ] 00:17:03.527 [2024-11-17 01:37:11.827535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:03.527 [2024-11-17 01:37:11.908643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:03.786 [2024-11-17 01:37:12.055536] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:04.352 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:04.352 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:04.352 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eElvJ8tHlH 00:17:04.352 01:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:04.611 [2024-11-17 01:37:13.005501] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:04.611 [2024-11-17 01:37:13.014236] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:04.611 [2024-11-17 01:37:13.014296] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:04.611 [2024-11-17 01:37:13.014379] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:04.611 [2024-11-17 01:37:13.015176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:04.611 [2024-11-17 01:37:13.016135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:04.611 [2024-11-17 01:37:13.017121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:04.611 [2024-11-17 01:37:13.017200] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:04.611 [2024-11-17 01:37:13.017221] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:04.611 [2024-11-17 01:37:13.017243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
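The tcp_sock_get_key error above also shows how the target looks a connection up in its PSK store: the TLS PSK identity is the literal string "NVMe0R01 <host NQN> <subsystem NQN>". Only nqn.2016-06.io.spdk:host1 was registered on cnode1 with --psk key0, so a handshake whose identity names host2 has nothing to match and the attach fails, which is exactly what this negative case expects. Reassembled from the parts visible in the error message (the NVMe0R01 prefix is copied verbatim from the log, not derived here):

# identity string from the error above, rebuilt for clarity
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
psk_identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$psk_identity"   # -> NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1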
00:17:04.611 request: 00:17:04.611 { 00:17:04.611 "name": "TLSTEST", 00:17:04.611 "trtype": "tcp", 00:17:04.611 "traddr": "10.0.0.3", 00:17:04.611 "adrfam": "ipv4", 00:17:04.611 "trsvcid": "4420", 00:17:04.611 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:04.611 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:04.611 "prchk_reftag": false, 00:17:04.611 "prchk_guard": false, 00:17:04.611 "hdgst": false, 00:17:04.611 "ddgst": false, 00:17:04.611 "psk": "key0", 00:17:04.611 "allow_unrecognized_csi": false, 00:17:04.611 "method": "bdev_nvme_attach_controller", 00:17:04.611 "req_id": 1 00:17:04.611 } 00:17:04.611 Got JSON-RPC error response 00:17:04.611 response: 00:17:04.611 { 00:17:04.611 "code": -5, 00:17:04.611 "message": "Input/output error" 00:17:04.611 } 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74237 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74237 ']' 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74237 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74237 00:17:04.611 killing process with pid 74237 00:17:04.611 Received shutdown signal, test time was about 10.000000 seconds 00:17:04.611 00:17:04.611 Latency(us) 00:17:04.611 [2024-11-17T01:37:13.070Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:04.611 [2024-11-17T01:37:13.070Z] =================================================================================================================== 00:17:04.611 [2024-11-17T01:37:13.070Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74237' 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74237 00:17:04.611 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74237 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eElvJ8tHlH 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eElvJ8tHlH 
00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.eElvJ8tHlH 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eElvJ8tHlH 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74272 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74272 /var/tmp/bdevperf.sock 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74272 ']' 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:05.548 01:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:05.548 [2024-11-17 01:37:13.993953] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:05.548 [2024-11-17 01:37:13.994124] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74272 ] 00:17:05.807 [2024-11-17 01:37:14.170677] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:05.807 [2024-11-17 01:37:14.252188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:06.066 [2024-11-17 01:37:14.397864] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:06.633 01:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:06.633 01:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:06.633 01:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eElvJ8tHlH 00:17:06.891 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:07.150 [2024-11-17 01:37:15.380525] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:07.150 [2024-11-17 01:37:15.394489] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:07.150 [2024-11-17 01:37:15.394550] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:07.150 [2024-11-17 01:37:15.394626] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:07.150 [2024-11-17 01:37:15.395147] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (107): Transport endpoint is not connected 00:17:07.150 [2024-11-17 01:37:15.396129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:17:07.150 [2024-11-17 01:37:15.397110] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:07.150 [2024-11-17 01:37:15.397167] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.3 00:17:07.150 [2024-11-17 01:37:15.397206] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:07.150 [2024-11-17 01:37:15.397226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:17:07.150 request: 00:17:07.150 { 00:17:07.150 "name": "TLSTEST", 00:17:07.150 "trtype": "tcp", 00:17:07.150 "traddr": "10.0.0.3", 00:17:07.150 "adrfam": "ipv4", 00:17:07.150 "trsvcid": "4420", 00:17:07.150 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:07.150 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.150 "prchk_reftag": false, 00:17:07.150 "prchk_guard": false, 00:17:07.150 "hdgst": false, 00:17:07.150 "ddgst": false, 00:17:07.150 "psk": "key0", 00:17:07.150 "allow_unrecognized_csi": false, 00:17:07.150 "method": "bdev_nvme_attach_controller", 00:17:07.150 "req_id": 1 00:17:07.150 } 00:17:07.150 Got JSON-RPC error response 00:17:07.150 response: 00:17:07.150 { 00:17:07.150 "code": -5, 00:17:07.150 "message": "Input/output error" 00:17:07.150 } 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74272 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74272 ']' 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74272 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74272 00:17:07.150 killing process with pid 74272 00:17:07.150 Received shutdown signal, test time was about 10.000000 seconds 00:17:07.150 00:17:07.150 Latency(us) 00:17:07.150 [2024-11-17T01:37:15.609Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.150 [2024-11-17T01:37:15.609Z] =================================================================================================================== 00:17:07.150 [2024-11-17T01:37:15.609Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74272' 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74272 00:17:07.150 01:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74272 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:08.086 01:37:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:08.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:08.086 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74310 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74310 /var/tmp/bdevperf.sock 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74310 ']' 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:08.087 01:37:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:08.087 [2024-11-17 01:37:16.293157] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:08.087 [2024-11-17 01:37:16.293335] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74310 ] 00:17:08.087 [2024-11-17 01:37:16.471549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.346 [2024-11-17 01:37:16.561064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.346 [2024-11-17 01:37:16.703497] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:08.913 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:08.913 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:08.913 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:09.190 [2024-11-17 01:37:17.398274] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:09.190 [2024-11-17 01:37:17.398341] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:09.190 request: 00:17:09.190 { 00:17:09.190 "name": "key0", 00:17:09.190 "path": "", 00:17:09.190 "method": "keyring_file_add_key", 00:17:09.190 "req_id": 1 00:17:09.190 } 00:17:09.190 Got JSON-RPC error response 00:17:09.190 response: 00:17:09.190 { 00:17:09.190 "code": -1, 00:17:09.190 "message": "Operation not permitted" 00:17:09.190 } 00:17:09.190 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:09.459 [2024-11-17 01:37:17.690510] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:09.459 [2024-11-17 01:37:17.690606] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:09.459 request: 00:17:09.459 { 00:17:09.460 "name": "TLSTEST", 00:17:09.460 "trtype": "tcp", 00:17:09.460 "traddr": "10.0.0.3", 00:17:09.460 "adrfam": "ipv4", 00:17:09.460 "trsvcid": "4420", 00:17:09.460 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:09.460 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:09.460 "prchk_reftag": false, 00:17:09.460 "prchk_guard": false, 00:17:09.460 "hdgst": false, 00:17:09.460 "ddgst": false, 00:17:09.460 "psk": "key0", 00:17:09.460 "allow_unrecognized_csi": false, 00:17:09.460 "method": "bdev_nvme_attach_controller", 00:17:09.460 "req_id": 1 00:17:09.460 } 00:17:09.460 Got JSON-RPC error response 00:17:09.460 response: 00:17:09.460 { 00:17:09.460 "code": -126, 00:17:09.460 "message": "Required key not available" 00:17:09.460 } 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74310 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74310 ']' 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74310 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.460 01:37:17 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74310 00:17:09.460 killing process with pid 74310 00:17:09.460 Received shutdown signal, test time was about 10.000000 seconds 00:17:09.460 00:17:09.460 Latency(us) 00:17:09.460 [2024-11-17T01:37:17.919Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:09.460 [2024-11-17T01:37:17.919Z] =================================================================================================================== 00:17:09.460 [2024-11-17T01:37:17.919Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74310' 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74310 00:17:09.460 01:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74310 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 73805 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 73805 ']' 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 73805 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73805 00:17:10.396 killing process with pid 73805 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73805' 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 73805 00:17:10.396 01:37:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 73805 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 
-- # prefix=NVMeTLSkey-1 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.7aOSk2rSxl 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.7aOSk2rSxl 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74373 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74373 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74373 ']' 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.331 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:11.332 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.332 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:11.332 01:37:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:11.332 [2024-11-17 01:37:19.695206] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:11.332 [2024-11-17 01:37:19.695422] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:11.591 [2024-11-17 01:37:19.875001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.591 [2024-11-17 01:37:19.959552] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:11.591 [2024-11-17 01:37:19.959660] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:11.591 [2024-11-17 01:37:19.959696] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:11.591 [2024-11-17 01:37:19.959719] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:11.591 [2024-11-17 01:37:19.959734] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:11.591 [2024-11-17 01:37:19.960994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:11.850 [2024-11-17 01:37:20.113968] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.7aOSk2rSxl 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7aOSk2rSxl 00:17:12.418 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:12.677 [2024-11-17 01:37:20.949657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:12.677 01:37:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:12.935 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:13.194 [2024-11-17 01:37:21.485849] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:13.194 [2024-11-17 01:37:21.486113] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:13.194 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:13.453 malloc0 00:17:13.453 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:13.712 01:37:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7aOSk2rSxl 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 
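For this third round (the /tmp/tmp.7aOSk2rSxl key created above), setup_nvmf_tgt issues the same target-side RPC sequence as before. Collected from the tls.sh@50-59 trace lines above, with only the shell variables added for readability, it is:

# target-side TLS setup as traced above (commands copied from the log)
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
key=/tmp/tmp.7aOSk2rSxl
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc keyring_file_add_key key0 "$key"
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0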
00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7aOSk2rSxl 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74423 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74423 /var/tmp/bdevperf.sock 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74423 ']' 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:13.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:13.980 01:37:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:14.238 [2024-11-17 01:37:22.519505] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:14.238 [2024-11-17 01:37:22.519678] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74423 ] 00:17:14.238 [2024-11-17 01:37:22.686460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:14.498 [2024-11-17 01:37:22.782833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:14.498 [2024-11-17 01:37:22.939038] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:15.434 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:15.434 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:15.434 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:17:15.434 01:37:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:15.694 [2024-11-17 01:37:24.008602] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:15.694 TLSTESTn1 00:17:15.694 01:37:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:15.953 Running I/O for 10 seconds... 00:17:17.827 3086.00 IOPS, 12.05 MiB/s [2024-11-17T01:37:27.221Z] 3136.00 IOPS, 12.25 MiB/s [2024-11-17T01:37:28.598Z] 3208.67 IOPS, 12.53 MiB/s [2024-11-17T01:37:29.535Z] 3232.00 IOPS, 12.62 MiB/s [2024-11-17T01:37:30.471Z] 3247.80 IOPS, 12.69 MiB/s [2024-11-17T01:37:31.407Z] 3235.33 IOPS, 12.64 MiB/s [2024-11-17T01:37:32.342Z] 3248.57 IOPS, 12.69 MiB/s [2024-11-17T01:37:33.279Z] 3257.12 IOPS, 12.72 MiB/s [2024-11-17T01:37:34.216Z] 3263.00 IOPS, 12.75 MiB/s [2024-11-17T01:37:34.475Z] 3269.40 IOPS, 12.77 MiB/s 00:17:26.016 Latency(us) 00:17:26.016 [2024-11-17T01:37:34.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.016 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:26.016 Verification LBA range: start 0x0 length 0x2000 00:17:26.016 TLSTESTn1 : 10.02 3274.06 12.79 0.00 0.00 39018.31 9592.09 29074.15 00:17:26.016 [2024-11-17T01:37:34.475Z] =================================================================================================================== 00:17:26.016 [2024-11-17T01:37:34.475Z] Total : 3274.06 12.79 0.00 0.00 39018.31 9592.09 29074.15 00:17:26.016 { 00:17:26.016 "results": [ 00:17:26.016 { 00:17:26.016 "job": "TLSTESTn1", 00:17:26.016 "core_mask": "0x4", 00:17:26.016 "workload": "verify", 00:17:26.016 "status": "finished", 00:17:26.016 "verify_range": { 00:17:26.016 "start": 0, 00:17:26.016 "length": 8192 00:17:26.016 }, 00:17:26.016 "queue_depth": 128, 00:17:26.016 "io_size": 4096, 00:17:26.016 "runtime": 10.024568, 00:17:26.016 "iops": 3274.05629848588, 00:17:26.016 "mibps": 12.789282415960468, 00:17:26.016 "io_failed": 0, 00:17:26.016 "io_timeout": 0, 00:17:26.016 "avg_latency_us": 39018.305104215426, 00:17:26.016 "min_latency_us": 9592.087272727273, 00:17:26.016 
"max_latency_us": 29074.15272727273 00:17:26.016 } 00:17:26.016 ], 00:17:26.016 "core_count": 1 00:17:26.016 } 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 74423 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74423 ']' 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74423 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74423 00:17:26.016 killing process with pid 74423 00:17:26.016 Received shutdown signal, test time was about 10.000000 seconds 00:17:26.016 00:17:26.016 Latency(us) 00:17:26.016 [2024-11-17T01:37:34.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:26.016 [2024-11-17T01:37:34.475Z] =================================================================================================================== 00:17:26.016 [2024-11-17T01:37:34.475Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74423' 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74423 00:17:26.016 01:37:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74423 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.7aOSk2rSxl 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7aOSk2rSxl 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7aOSk2rSxl 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.7aOSk2rSxl 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.7aOSk2rSxl 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=74571 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:26.954 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 74571 /var/tmp/bdevperf.sock 00:17:26.955 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74571 ']' 00:17:26.955 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:26.955 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:26.955 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:26.955 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.955 01:37:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:26.955 [2024-11-17 01:37:35.208750] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:17:26.955 [2024-11-17 01:37:35.208931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74571 ] 00:17:26.955 [2024-11-17 01:37:35.379774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.214 [2024-11-17 01:37:35.469926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:27.214 [2024-11-17 01:37:35.635330] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:27.782 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.782 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:27.782 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:17:28.041 [2024-11-17 01:37:36.349920] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7aOSk2rSxl': 0100666 00:17:28.041 [2024-11-17 01:37:36.349975] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:28.041 request: 00:17:28.041 { 00:17:28.041 "name": "key0", 00:17:28.041 "path": "/tmp/tmp.7aOSk2rSxl", 00:17:28.041 "method": "keyring_file_add_key", 00:17:28.041 "req_id": 1 00:17:28.041 } 00:17:28.041 Got JSON-RPC error response 00:17:28.041 response: 00:17:28.041 { 00:17:28.041 "code": -1, 00:17:28.041 "message": "Operation not permitted" 00:17:28.041 } 00:17:28.041 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:28.300 [2024-11-17 01:37:36.630091] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:28.300 [2024-11-17 01:37:36.630176] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:28.300 request: 00:17:28.300 { 00:17:28.300 "name": "TLSTEST", 00:17:28.300 "trtype": "tcp", 00:17:28.300 "traddr": "10.0.0.3", 00:17:28.300 "adrfam": "ipv4", 00:17:28.300 "trsvcid": "4420", 00:17:28.300 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:28.300 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:28.301 "prchk_reftag": false, 00:17:28.301 "prchk_guard": false, 00:17:28.301 "hdgst": false, 00:17:28.301 "ddgst": false, 00:17:28.301 "psk": "key0", 00:17:28.301 "allow_unrecognized_csi": false, 00:17:28.301 "method": "bdev_nvme_attach_controller", 00:17:28.301 "req_id": 1 00:17:28.301 } 00:17:28.301 Got JSON-RPC error response 00:17:28.301 response: 00:17:28.301 { 00:17:28.301 "code": -126, 00:17:28.301 "message": "Required key not available" 00:17:28.301 } 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 74571 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74571 ']' 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74571 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74571 00:17:28.301 killing process with pid 74571 00:17:28.301 Received shutdown signal, test time was about 10.000000 seconds 00:17:28.301 00:17:28.301 Latency(us) 00:17:28.301 [2024-11-17T01:37:36.760Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:28.301 [2024-11-17T01:37:36.760Z] =================================================================================================================== 00:17:28.301 [2024-11-17T01:37:36.760Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74571' 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74571 00:17:28.301 01:37:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74571 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 74373 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74373 ']' 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74373 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74373 00:17:29.256 killing process with pid 74373 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74373' 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74373 00:17:29.256 01:37:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74373 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set 
+x 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74623 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74623 00:17:30.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74623 ']' 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.221 01:37:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.221 [2024-11-17 01:37:38.621573] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:30.221 [2024-11-17 01:37:38.621729] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.480 [2024-11-17 01:37:38.790144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.480 [2024-11-17 01:37:38.880985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.480 [2024-11-17 01:37:38.881051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:30.480 [2024-11-17 01:37:38.881086] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.480 [2024-11-17 01:37:38.881108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.480 [2024-11-17 01:37:38.881121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:30.480 [2024-11-17 01:37:38.882198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.740 [2024-11-17 01:37:39.049198] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.7aOSk2rSxl 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.7aOSk2rSxl 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.7aOSk2rSxl 00:17:31.307 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7aOSk2rSxl 00:17:31.308 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:31.565 [2024-11-17 01:37:39.857268] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:31.565 01:37:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:31.824 01:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:32.083 [2024-11-17 01:37:40.381479] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:32.083 [2024-11-17 01:37:40.381792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:32.083 01:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:32.341 malloc0 00:17:32.341 01:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:32.601 01:37:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:17:32.860 
[2024-11-17 01:37:41.155493] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.7aOSk2rSxl': 0100666 00:17:32.860 [2024-11-17 01:37:41.155560] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:32.860 request: 00:17:32.860 { 00:17:32.860 "name": "key0", 00:17:32.860 "path": "/tmp/tmp.7aOSk2rSxl", 00:17:32.860 "method": "keyring_file_add_key", 00:17:32.860 "req_id": 1 00:17:32.860 } 00:17:32.860 Got JSON-RPC error response 00:17:32.860 response: 00:17:32.860 { 00:17:32.860 "code": -1, 00:17:32.860 "message": "Operation not permitted" 00:17:32.860 } 00:17:32.860 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:33.120 [2024-11-17 01:37:41.391669] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:17:33.120 [2024-11-17 01:37:41.391759] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:17:33.120 request: 00:17:33.120 { 00:17:33.120 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:33.120 "host": "nqn.2016-06.io.spdk:host1", 00:17:33.120 "psk": "key0", 00:17:33.120 "method": "nvmf_subsystem_add_host", 00:17:33.120 "req_id": 1 00:17:33.120 } 00:17:33.120 Got JSON-RPC error response 00:17:33.120 response: 00:17:33.120 { 00:17:33.120 "code": -32603, 00:17:33.120 "message": "Internal error" 00:17:33.120 } 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 74623 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74623 ']' 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74623 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74623 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:33.120 killing process with pid 74623 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74623' 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74623 00:17:33.120 01:37:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74623 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.7aOSk2rSxl 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74699 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74699 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74699 ']' 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:34.058 01:37:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:34.058 [2024-11-17 01:37:42.482336] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:34.058 [2024-11-17 01:37:42.482510] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.318 [2024-11-17 01:37:42.660639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.318 [2024-11-17 01:37:42.743290] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.318 [2024-11-17 01:37:42.743351] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.318 [2024-11-17 01:37:42.743385] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.318 [2024-11-17 01:37:42.743406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.318 [2024-11-17 01:37:42.743418] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:34.318 [2024-11-17 01:37:42.744672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.578 [2024-11-17 01:37:42.900772] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:35.145 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:35.146 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:35.146 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:35.146 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:35.146 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:35.146 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.146 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.7aOSk2rSxl 00:17:35.146 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7aOSk2rSxl 00:17:35.146 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:35.404 [2024-11-17 01:37:43.694945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.404 01:37:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:35.663 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:35.922 [2024-11-17 01:37:44.271499] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:35.922 [2024-11-17 01:37:44.271898] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:35.922 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:36.181 malloc0 00:17:36.181 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:36.440 01:37:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:17:36.699 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:36.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
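For reference, the target-side TLS setup that just completed in the trace above reduces to the following shell sketch. All values (key path, NQNs, address 10.0.0.3, port 4420, SPDK checkout path) are the ones used by this run; RPC= and KEY= are only shorthands for those paths, and a running nvmf_tgt reachable through the default RPC socket is assumed. The chmod 0600 step matters because the earlier pass with 0666 permissions made keyring_file_add_key fail with "Operation not permitted".

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  KEY=/tmp/tmp.7aOSk2rSxl
  chmod 0600 "$KEY"                                   # keyring_file rejects keys readable by others (0666)
  $RPC nvmf_create_transport -t tcp -o                # TCP transport init
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k: TLS listener
  $RPC bdev_malloc_create 32 4096 -b malloc0          # backing namespace
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC keyring_file_add_key key0 "$KEY"               # register the PSK file in the keyring
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0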
00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=74760 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 74760 /var/tmp/bdevperf.sock 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74760 ']' 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:36.959 01:37:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:36.959 [2024-11-17 01:37:45.391413] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:36.959 [2024-11-17 01:37:45.392215] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74760 ] 00:17:37.218 [2024-11-17 01:37:45.578942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:37.477 [2024-11-17 01:37:45.698628] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:37.477 [2024-11-17 01:37:45.853157] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:38.043 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:38.043 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:38.043 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:17:38.043 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:38.302 [2024-11-17 01:37:46.684774] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:38.561 TLSTESTn1 00:17:38.561 01:37:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:17:38.820 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:17:38.820 "subsystems": [ 00:17:38.820 { 00:17:38.820 "subsystem": "keyring", 00:17:38.820 "config": [ 00:17:38.820 { 00:17:38.820 "method": "keyring_file_add_key", 00:17:38.820 "params": { 00:17:38.820 "name": "key0", 00:17:38.820 "path": "/tmp/tmp.7aOSk2rSxl" 00:17:38.820 } 00:17:38.820 } 00:17:38.820 ] 00:17:38.820 }, 
00:17:38.820 { 00:17:38.820 "subsystem": "iobuf", 00:17:38.820 "config": [ 00:17:38.820 { 00:17:38.820 "method": "iobuf_set_options", 00:17:38.820 "params": { 00:17:38.820 "small_pool_count": 8192, 00:17:38.820 "large_pool_count": 1024, 00:17:38.820 "small_bufsize": 8192, 00:17:38.820 "large_bufsize": 135168, 00:17:38.820 "enable_numa": false 00:17:38.820 } 00:17:38.820 } 00:17:38.820 ] 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "subsystem": "sock", 00:17:38.820 "config": [ 00:17:38.820 { 00:17:38.820 "method": "sock_set_default_impl", 00:17:38.820 "params": { 00:17:38.820 "impl_name": "uring" 00:17:38.820 } 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "method": "sock_impl_set_options", 00:17:38.820 "params": { 00:17:38.820 "impl_name": "ssl", 00:17:38.820 "recv_buf_size": 4096, 00:17:38.820 "send_buf_size": 4096, 00:17:38.820 "enable_recv_pipe": true, 00:17:38.820 "enable_quickack": false, 00:17:38.820 "enable_placement_id": 0, 00:17:38.820 "enable_zerocopy_send_server": true, 00:17:38.820 "enable_zerocopy_send_client": false, 00:17:38.820 "zerocopy_threshold": 0, 00:17:38.820 "tls_version": 0, 00:17:38.820 "enable_ktls": false 00:17:38.820 } 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "method": "sock_impl_set_options", 00:17:38.820 "params": { 00:17:38.820 "impl_name": "posix", 00:17:38.820 "recv_buf_size": 2097152, 00:17:38.820 "send_buf_size": 2097152, 00:17:38.820 "enable_recv_pipe": true, 00:17:38.820 "enable_quickack": false, 00:17:38.820 "enable_placement_id": 0, 00:17:38.820 "enable_zerocopy_send_server": true, 00:17:38.820 "enable_zerocopy_send_client": false, 00:17:38.820 "zerocopy_threshold": 0, 00:17:38.820 "tls_version": 0, 00:17:38.820 "enable_ktls": false 00:17:38.820 } 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "method": "sock_impl_set_options", 00:17:38.820 "params": { 00:17:38.820 "impl_name": "uring", 00:17:38.820 "recv_buf_size": 2097152, 00:17:38.820 "send_buf_size": 2097152, 00:17:38.820 "enable_recv_pipe": true, 00:17:38.820 "enable_quickack": false, 00:17:38.820 "enable_placement_id": 0, 00:17:38.820 "enable_zerocopy_send_server": false, 00:17:38.820 "enable_zerocopy_send_client": false, 00:17:38.820 "zerocopy_threshold": 0, 00:17:38.820 "tls_version": 0, 00:17:38.820 "enable_ktls": false 00:17:38.820 } 00:17:38.820 } 00:17:38.820 ] 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "subsystem": "vmd", 00:17:38.820 "config": [] 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "subsystem": "accel", 00:17:38.820 "config": [ 00:17:38.820 { 00:17:38.820 "method": "accel_set_options", 00:17:38.820 "params": { 00:17:38.820 "small_cache_size": 128, 00:17:38.820 "large_cache_size": 16, 00:17:38.820 "task_count": 2048, 00:17:38.820 "sequence_count": 2048, 00:17:38.820 "buf_count": 2048 00:17:38.820 } 00:17:38.820 } 00:17:38.820 ] 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "subsystem": "bdev", 00:17:38.820 "config": [ 00:17:38.820 { 00:17:38.820 "method": "bdev_set_options", 00:17:38.820 "params": { 00:17:38.820 "bdev_io_pool_size": 65535, 00:17:38.820 "bdev_io_cache_size": 256, 00:17:38.820 "bdev_auto_examine": true, 00:17:38.820 "iobuf_small_cache_size": 128, 00:17:38.820 "iobuf_large_cache_size": 16 00:17:38.820 } 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "method": "bdev_raid_set_options", 00:17:38.820 "params": { 00:17:38.820 "process_window_size_kb": 1024, 00:17:38.820 "process_max_bandwidth_mb_sec": 0 00:17:38.820 } 00:17:38.820 }, 00:17:38.820 { 00:17:38.820 "method": "bdev_iscsi_set_options", 00:17:38.820 "params": { 00:17:38.820 "timeout_sec": 30 00:17:38.820 } 00:17:38.820 
}, 00:17:38.820 { 00:17:38.820 "method": "bdev_nvme_set_options", 00:17:38.820 "params": { 00:17:38.820 "action_on_timeout": "none", 00:17:38.820 "timeout_us": 0, 00:17:38.820 "timeout_admin_us": 0, 00:17:38.821 "keep_alive_timeout_ms": 10000, 00:17:38.821 "arbitration_burst": 0, 00:17:38.821 "low_priority_weight": 0, 00:17:38.821 "medium_priority_weight": 0, 00:17:38.821 "high_priority_weight": 0, 00:17:38.821 "nvme_adminq_poll_period_us": 10000, 00:17:38.821 "nvme_ioq_poll_period_us": 0, 00:17:38.821 "io_queue_requests": 0, 00:17:38.821 "delay_cmd_submit": true, 00:17:38.821 "transport_retry_count": 4, 00:17:38.821 "bdev_retry_count": 3, 00:17:38.821 "transport_ack_timeout": 0, 00:17:38.821 "ctrlr_loss_timeout_sec": 0, 00:17:38.821 "reconnect_delay_sec": 0, 00:17:38.821 "fast_io_fail_timeout_sec": 0, 00:17:38.821 "disable_auto_failback": false, 00:17:38.821 "generate_uuids": false, 00:17:38.821 "transport_tos": 0, 00:17:38.821 "nvme_error_stat": false, 00:17:38.821 "rdma_srq_size": 0, 00:17:38.821 "io_path_stat": false, 00:17:38.821 "allow_accel_sequence": false, 00:17:38.821 "rdma_max_cq_size": 0, 00:17:38.821 "rdma_cm_event_timeout_ms": 0, 00:17:38.821 "dhchap_digests": [ 00:17:38.821 "sha256", 00:17:38.821 "sha384", 00:17:38.821 "sha512" 00:17:38.821 ], 00:17:38.821 "dhchap_dhgroups": [ 00:17:38.821 "null", 00:17:38.821 "ffdhe2048", 00:17:38.821 "ffdhe3072", 00:17:38.821 "ffdhe4096", 00:17:38.821 "ffdhe6144", 00:17:38.821 "ffdhe8192" 00:17:38.821 ] 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "bdev_nvme_set_hotplug", 00:17:38.821 "params": { 00:17:38.821 "period_us": 100000, 00:17:38.821 "enable": false 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "bdev_malloc_create", 00:17:38.821 "params": { 00:17:38.821 "name": "malloc0", 00:17:38.821 "num_blocks": 8192, 00:17:38.821 "block_size": 4096, 00:17:38.821 "physical_block_size": 4096, 00:17:38.821 "uuid": "fd0d4764-a3f2-4e21-90d4-4a7ef29eab1d", 00:17:38.821 "optimal_io_boundary": 0, 00:17:38.821 "md_size": 0, 00:17:38.821 "dif_type": 0, 00:17:38.821 "dif_is_head_of_md": false, 00:17:38.821 "dif_pi_format": 0 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "bdev_wait_for_examine" 00:17:38.821 } 00:17:38.821 ] 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "subsystem": "nbd", 00:17:38.821 "config": [] 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "subsystem": "scheduler", 00:17:38.821 "config": [ 00:17:38.821 { 00:17:38.821 "method": "framework_set_scheduler", 00:17:38.821 "params": { 00:17:38.821 "name": "static" 00:17:38.821 } 00:17:38.821 } 00:17:38.821 ] 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "subsystem": "nvmf", 00:17:38.821 "config": [ 00:17:38.821 { 00:17:38.821 "method": "nvmf_set_config", 00:17:38.821 "params": { 00:17:38.821 "discovery_filter": "match_any", 00:17:38.821 "admin_cmd_passthru": { 00:17:38.821 "identify_ctrlr": false 00:17:38.821 }, 00:17:38.821 "dhchap_digests": [ 00:17:38.821 "sha256", 00:17:38.821 "sha384", 00:17:38.821 "sha512" 00:17:38.821 ], 00:17:38.821 "dhchap_dhgroups": [ 00:17:38.821 "null", 00:17:38.821 "ffdhe2048", 00:17:38.821 "ffdhe3072", 00:17:38.821 "ffdhe4096", 00:17:38.821 "ffdhe6144", 00:17:38.821 "ffdhe8192" 00:17:38.821 ] 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "nvmf_set_max_subsystems", 00:17:38.821 "params": { 00:17:38.821 "max_subsystems": 1024 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "nvmf_set_crdt", 00:17:38.821 "params": { 00:17:38.821 "crdt1": 0, 00:17:38.821 
"crdt2": 0, 00:17:38.821 "crdt3": 0 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "nvmf_create_transport", 00:17:38.821 "params": { 00:17:38.821 "trtype": "TCP", 00:17:38.821 "max_queue_depth": 128, 00:17:38.821 "max_io_qpairs_per_ctrlr": 127, 00:17:38.821 "in_capsule_data_size": 4096, 00:17:38.821 "max_io_size": 131072, 00:17:38.821 "io_unit_size": 131072, 00:17:38.821 "max_aq_depth": 128, 00:17:38.821 "num_shared_buffers": 511, 00:17:38.821 "buf_cache_size": 4294967295, 00:17:38.821 "dif_insert_or_strip": false, 00:17:38.821 "zcopy": false, 00:17:38.821 "c2h_success": false, 00:17:38.821 "sock_priority": 0, 00:17:38.821 "abort_timeout_sec": 1, 00:17:38.821 "ack_timeout": 0, 00:17:38.821 "data_wr_pool_size": 0 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "nvmf_create_subsystem", 00:17:38.821 "params": { 00:17:38.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.821 "allow_any_host": false, 00:17:38.821 "serial_number": "SPDK00000000000001", 00:17:38.821 "model_number": "SPDK bdev Controller", 00:17:38.821 "max_namespaces": 10, 00:17:38.821 "min_cntlid": 1, 00:17:38.821 "max_cntlid": 65519, 00:17:38.821 "ana_reporting": false 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "nvmf_subsystem_add_host", 00:17:38.821 "params": { 00:17:38.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.821 "host": "nqn.2016-06.io.spdk:host1", 00:17:38.821 "psk": "key0" 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "nvmf_subsystem_add_ns", 00:17:38.821 "params": { 00:17:38.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.821 "namespace": { 00:17:38.821 "nsid": 1, 00:17:38.821 "bdev_name": "malloc0", 00:17:38.821 "nguid": "FD0D4764A3F24E2190D44A7EF29EAB1D", 00:17:38.821 "uuid": "fd0d4764-a3f2-4e21-90d4-4a7ef29eab1d", 00:17:38.821 "no_auto_visible": false 00:17:38.821 } 00:17:38.821 } 00:17:38.821 }, 00:17:38.821 { 00:17:38.821 "method": "nvmf_subsystem_add_listener", 00:17:38.821 "params": { 00:17:38.821 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:38.821 "listen_address": { 00:17:38.821 "trtype": "TCP", 00:17:38.821 "adrfam": "IPv4", 00:17:38.821 "traddr": "10.0.0.3", 00:17:38.821 "trsvcid": "4420" 00:17:38.821 }, 00:17:38.821 "secure_channel": true 00:17:38.821 } 00:17:38.821 } 00:17:38.821 ] 00:17:38.821 } 00:17:38.821 ] 00:17:38.821 }' 00:17:38.821 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:17:39.092 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:17:39.092 "subsystems": [ 00:17:39.092 { 00:17:39.092 "subsystem": "keyring", 00:17:39.092 "config": [ 00:17:39.092 { 00:17:39.092 "method": "keyring_file_add_key", 00:17:39.092 "params": { 00:17:39.092 "name": "key0", 00:17:39.092 "path": "/tmp/tmp.7aOSk2rSxl" 00:17:39.092 } 00:17:39.092 } 00:17:39.092 ] 00:17:39.092 }, 00:17:39.092 { 00:17:39.092 "subsystem": "iobuf", 00:17:39.092 "config": [ 00:17:39.092 { 00:17:39.092 "method": "iobuf_set_options", 00:17:39.092 "params": { 00:17:39.092 "small_pool_count": 8192, 00:17:39.092 "large_pool_count": 1024, 00:17:39.092 "small_bufsize": 8192, 00:17:39.092 "large_bufsize": 135168, 00:17:39.092 "enable_numa": false 00:17:39.092 } 00:17:39.092 } 00:17:39.092 ] 00:17:39.092 }, 00:17:39.092 { 00:17:39.092 "subsystem": "sock", 00:17:39.092 "config": [ 00:17:39.092 { 00:17:39.092 "method": "sock_set_default_impl", 00:17:39.093 "params": { 00:17:39.093 "impl_name": "uring" 00:17:39.093 
} 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "sock_impl_set_options", 00:17:39.093 "params": { 00:17:39.093 "impl_name": "ssl", 00:17:39.093 "recv_buf_size": 4096, 00:17:39.093 "send_buf_size": 4096, 00:17:39.093 "enable_recv_pipe": true, 00:17:39.093 "enable_quickack": false, 00:17:39.093 "enable_placement_id": 0, 00:17:39.093 "enable_zerocopy_send_server": true, 00:17:39.093 "enable_zerocopy_send_client": false, 00:17:39.093 "zerocopy_threshold": 0, 00:17:39.093 "tls_version": 0, 00:17:39.093 "enable_ktls": false 00:17:39.093 } 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "sock_impl_set_options", 00:17:39.093 "params": { 00:17:39.093 "impl_name": "posix", 00:17:39.093 "recv_buf_size": 2097152, 00:17:39.093 "send_buf_size": 2097152, 00:17:39.093 "enable_recv_pipe": true, 00:17:39.093 "enable_quickack": false, 00:17:39.093 "enable_placement_id": 0, 00:17:39.093 "enable_zerocopy_send_server": true, 00:17:39.093 "enable_zerocopy_send_client": false, 00:17:39.093 "zerocopy_threshold": 0, 00:17:39.093 "tls_version": 0, 00:17:39.093 "enable_ktls": false 00:17:39.093 } 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "sock_impl_set_options", 00:17:39.093 "params": { 00:17:39.093 "impl_name": "uring", 00:17:39.093 "recv_buf_size": 2097152, 00:17:39.093 "send_buf_size": 2097152, 00:17:39.093 "enable_recv_pipe": true, 00:17:39.093 "enable_quickack": false, 00:17:39.093 "enable_placement_id": 0, 00:17:39.093 "enable_zerocopy_send_server": false, 00:17:39.093 "enable_zerocopy_send_client": false, 00:17:39.093 "zerocopy_threshold": 0, 00:17:39.093 "tls_version": 0, 00:17:39.093 "enable_ktls": false 00:17:39.093 } 00:17:39.093 } 00:17:39.093 ] 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "subsystem": "vmd", 00:17:39.093 "config": [] 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "subsystem": "accel", 00:17:39.093 "config": [ 00:17:39.093 { 00:17:39.093 "method": "accel_set_options", 00:17:39.093 "params": { 00:17:39.093 "small_cache_size": 128, 00:17:39.093 "large_cache_size": 16, 00:17:39.093 "task_count": 2048, 00:17:39.093 "sequence_count": 2048, 00:17:39.093 "buf_count": 2048 00:17:39.093 } 00:17:39.093 } 00:17:39.093 ] 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "subsystem": "bdev", 00:17:39.093 "config": [ 00:17:39.093 { 00:17:39.093 "method": "bdev_set_options", 00:17:39.093 "params": { 00:17:39.093 "bdev_io_pool_size": 65535, 00:17:39.093 "bdev_io_cache_size": 256, 00:17:39.093 "bdev_auto_examine": true, 00:17:39.093 "iobuf_small_cache_size": 128, 00:17:39.093 "iobuf_large_cache_size": 16 00:17:39.093 } 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "bdev_raid_set_options", 00:17:39.093 "params": { 00:17:39.093 "process_window_size_kb": 1024, 00:17:39.093 "process_max_bandwidth_mb_sec": 0 00:17:39.093 } 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "bdev_iscsi_set_options", 00:17:39.093 "params": { 00:17:39.093 "timeout_sec": 30 00:17:39.093 } 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "bdev_nvme_set_options", 00:17:39.093 "params": { 00:17:39.093 "action_on_timeout": "none", 00:17:39.093 "timeout_us": 0, 00:17:39.093 "timeout_admin_us": 0, 00:17:39.093 "keep_alive_timeout_ms": 10000, 00:17:39.093 "arbitration_burst": 0, 00:17:39.093 "low_priority_weight": 0, 00:17:39.093 "medium_priority_weight": 0, 00:17:39.093 "high_priority_weight": 0, 00:17:39.093 "nvme_adminq_poll_period_us": 10000, 00:17:39.093 "nvme_ioq_poll_period_us": 0, 00:17:39.093 "io_queue_requests": 512, 00:17:39.093 "delay_cmd_submit": true, 00:17:39.093 "transport_retry_count": 4, 
00:17:39.093 "bdev_retry_count": 3, 00:17:39.093 "transport_ack_timeout": 0, 00:17:39.093 "ctrlr_loss_timeout_sec": 0, 00:17:39.093 "reconnect_delay_sec": 0, 00:17:39.093 "fast_io_fail_timeout_sec": 0, 00:17:39.093 "disable_auto_failback": false, 00:17:39.093 "generate_uuids": false, 00:17:39.093 "transport_tos": 0, 00:17:39.093 "nvme_error_stat": false, 00:17:39.093 "rdma_srq_size": 0, 00:17:39.093 "io_path_stat": false, 00:17:39.093 "allow_accel_sequence": false, 00:17:39.093 "rdma_max_cq_size": 0, 00:17:39.093 "rdma_cm_event_timeout_ms": 0, 00:17:39.093 "dhchap_digests": [ 00:17:39.093 "sha256", 00:17:39.093 "sha384", 00:17:39.093 "sha512" 00:17:39.093 ], 00:17:39.093 "dhchap_dhgroups": [ 00:17:39.093 "null", 00:17:39.093 "ffdhe2048", 00:17:39.093 "ffdhe3072", 00:17:39.093 "ffdhe4096", 00:17:39.093 "ffdhe6144", 00:17:39.093 "ffdhe8192" 00:17:39.093 ] 00:17:39.093 } 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "bdev_nvme_attach_controller", 00:17:39.093 "params": { 00:17:39.093 "name": "TLSTEST", 00:17:39.093 "trtype": "TCP", 00:17:39.093 "adrfam": "IPv4", 00:17:39.093 "traddr": "10.0.0.3", 00:17:39.093 "trsvcid": "4420", 00:17:39.093 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:39.093 "prchk_reftag": false, 00:17:39.093 "prchk_guard": false, 00:17:39.093 "ctrlr_loss_timeout_sec": 0, 00:17:39.093 "reconnect_delay_sec": 0, 00:17:39.093 "fast_io_fail_timeout_sec": 0, 00:17:39.093 "psk": "key0", 00:17:39.093 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:39.093 "hdgst": false, 00:17:39.093 "ddgst": false, 00:17:39.093 "multipath": "multipath" 00:17:39.093 } 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "bdev_nvme_set_hotplug", 00:17:39.093 "params": { 00:17:39.093 "period_us": 100000, 00:17:39.093 "enable": false 00:17:39.093 } 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "method": "bdev_wait_for_examine" 00:17:39.093 } 00:17:39.093 ] 00:17:39.093 }, 00:17:39.093 { 00:17:39.093 "subsystem": "nbd", 00:17:39.093 "config": [] 00:17:39.093 } 00:17:39.093 ] 00:17:39.093 }' 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 74760 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74760 ']' 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74760 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74760 00:17:39.093 killing process with pid 74760 00:17:39.093 Received shutdown signal, test time was about 10.000000 seconds 00:17:39.093 00:17:39.093 Latency(us) 00:17:39.093 [2024-11-17T01:37:47.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:39.093 [2024-11-17T01:37:47.552Z] =================================================================================================================== 00:17:39.093 [2024-11-17T01:37:47.552Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # 
echo 'killing process with pid 74760' 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74760 00:17:39.093 01:37:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74760 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 74699 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74699 ']' 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74699 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74699 00:17:40.046 killing process with pid 74699 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74699' 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74699 00:17:40.046 01:37:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74699 00:17:40.982 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:17:40.982 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:40.982 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:40.982 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:17:40.982 "subsystems": [ 00:17:40.982 { 00:17:40.982 "subsystem": "keyring", 00:17:40.982 "config": [ 00:17:40.982 { 00:17:40.982 "method": "keyring_file_add_key", 00:17:40.982 "params": { 00:17:40.982 "name": "key0", 00:17:40.982 "path": "/tmp/tmp.7aOSk2rSxl" 00:17:40.982 } 00:17:40.982 } 00:17:40.982 ] 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "subsystem": "iobuf", 00:17:40.982 "config": [ 00:17:40.982 { 00:17:40.982 "method": "iobuf_set_options", 00:17:40.982 "params": { 00:17:40.982 "small_pool_count": 8192, 00:17:40.982 "large_pool_count": 1024, 00:17:40.982 "small_bufsize": 8192, 00:17:40.982 "large_bufsize": 135168, 00:17:40.982 "enable_numa": false 00:17:40.982 } 00:17:40.982 } 00:17:40.982 ] 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "subsystem": "sock", 00:17:40.982 "config": [ 00:17:40.982 { 00:17:40.982 "method": "sock_set_default_impl", 00:17:40.982 "params": { 00:17:40.982 "impl_name": "uring" 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "sock_impl_set_options", 00:17:40.982 "params": { 00:17:40.982 "impl_name": "ssl", 00:17:40.982 "recv_buf_size": 4096, 00:17:40.982 "send_buf_size": 4096, 00:17:40.982 "enable_recv_pipe": true, 00:17:40.982 "enable_quickack": false, 00:17:40.982 "enable_placement_id": 0, 00:17:40.982 "enable_zerocopy_send_server": true, 00:17:40.982 "enable_zerocopy_send_client": false, 00:17:40.982 "zerocopy_threshold": 0, 00:17:40.982 "tls_version": 0, 00:17:40.982 "enable_ktls": false 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": 
"sock_impl_set_options", 00:17:40.982 "params": { 00:17:40.982 "impl_name": "posix", 00:17:40.982 "recv_buf_size": 2097152, 00:17:40.982 "send_buf_size": 2097152, 00:17:40.982 "enable_recv_pipe": true, 00:17:40.982 "enable_quickack": false, 00:17:40.982 "enable_placement_id": 0, 00:17:40.982 "enable_zerocopy_send_server": true, 00:17:40.982 "enable_zerocopy_send_client": false, 00:17:40.982 "zerocopy_threshold": 0, 00:17:40.982 "tls_version": 0, 00:17:40.982 "enable_ktls": false 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "sock_impl_set_options", 00:17:40.982 "params": { 00:17:40.982 "impl_name": "uring", 00:17:40.982 "recv_buf_size": 2097152, 00:17:40.982 "send_buf_size": 2097152, 00:17:40.982 "enable_recv_pipe": true, 00:17:40.982 "enable_quickack": false, 00:17:40.982 "enable_placement_id": 0, 00:17:40.982 "enable_zerocopy_send_server": false, 00:17:40.982 "enable_zerocopy_send_client": false, 00:17:40.982 "zerocopy_threshold": 0, 00:17:40.982 "tls_version": 0, 00:17:40.982 "enable_ktls": false 00:17:40.982 } 00:17:40.982 } 00:17:40.982 ] 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "subsystem": "vmd", 00:17:40.982 "config": [] 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "subsystem": "accel", 00:17:40.982 "config": [ 00:17:40.982 { 00:17:40.982 "method": "accel_set_options", 00:17:40.982 "params": { 00:17:40.982 "small_cache_size": 128, 00:17:40.982 "large_cache_size": 16, 00:17:40.982 "task_count": 2048, 00:17:40.982 "sequence_count": 2048, 00:17:40.982 "buf_count": 2048 00:17:40.982 } 00:17:40.982 } 00:17:40.982 ] 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "subsystem": "bdev", 00:17:40.982 "config": [ 00:17:40.982 { 00:17:40.982 "method": "bdev_set_options", 00:17:40.982 "params": { 00:17:40.982 "bdev_io_pool_size": 65535, 00:17:40.982 "bdev_io_cache_size": 256, 00:17:40.982 "bdev_auto_examine": true, 00:17:40.982 "iobuf_small_cache_size": 128, 00:17:40.982 "iobuf_large_cache_size": 16 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "bdev_raid_set_options", 00:17:40.982 "params": { 00:17:40.982 "process_window_size_kb": 1024, 00:17:40.982 "process_max_bandwidth_mb_sec": 0 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "bdev_iscsi_set_options", 00:17:40.982 "params": { 00:17:40.982 "timeout_sec": 30 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "bdev_nvme_set_options", 00:17:40.982 "params": { 00:17:40.982 "action_on_timeout": "none", 00:17:40.982 "timeout_us": 0, 00:17:40.982 "timeout_admin_us": 0, 00:17:40.982 "keep_alive_timeout_ms": 10000, 00:17:40.982 "arbitration_burst": 0, 00:17:40.982 "low_priority_weight": 0, 00:17:40.982 "medium_priority_weight": 0, 00:17:40.982 "high_priority_weight": 0, 00:17:40.982 "nvme_adminq_poll_period_us": 10000, 00:17:40.982 "nvme_ioq_poll_period_us": 0, 00:17:40.982 "io_queue_requests": 0, 00:17:40.982 "delay_cmd_submit": true, 00:17:40.982 "transport_retry_count": 4, 00:17:40.982 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.982 "bdev_retry_count": 3, 00:17:40.982 "transport_ack_timeout": 0, 00:17:40.982 "ctrlr_loss_timeout_sec": 0, 00:17:40.982 "reconnect_delay_sec": 0, 00:17:40.982 "fast_io_fail_timeout_sec": 0, 00:17:40.982 "disable_auto_failback": false, 00:17:40.982 "generate_uuids": false, 00:17:40.982 "transport_tos": 0, 00:17:40.982 "nvme_error_stat": false, 00:17:40.982 "rdma_srq_size": 0, 00:17:40.982 "io_path_stat": false, 00:17:40.982 "allow_accel_sequence": false, 00:17:40.982 
"rdma_max_cq_size": 0, 00:17:40.982 "rdma_cm_event_timeout_ms": 0, 00:17:40.982 "dhchap_digests": [ 00:17:40.982 "sha256", 00:17:40.982 "sha384", 00:17:40.982 "sha512" 00:17:40.982 ], 00:17:40.982 "dhchap_dhgroups": [ 00:17:40.982 "null", 00:17:40.982 "ffdhe2048", 00:17:40.982 "ffdhe3072", 00:17:40.982 "ffdhe4096", 00:17:40.982 "ffdhe6144", 00:17:40.982 "ffdhe8192" 00:17:40.982 ] 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "bdev_nvme_set_hotplug", 00:17:40.982 "params": { 00:17:40.982 "period_us": 100000, 00:17:40.982 "enable": false 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "bdev_malloc_create", 00:17:40.982 "params": { 00:17:40.982 "name": "malloc0", 00:17:40.982 "num_blocks": 8192, 00:17:40.982 "block_size": 4096, 00:17:40.982 "physical_block_size": 4096, 00:17:40.982 "uuid": "fd0d4764-a3f2-4e21-90d4-4a7ef29eab1d", 00:17:40.982 "optimal_io_boundary": 0, 00:17:40.982 "md_size": 0, 00:17:40.982 "dif_type": 0, 00:17:40.982 "dif_is_head_of_md": false, 00:17:40.982 "dif_pi_format": 0 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "bdev_wait_for_examine" 00:17:40.982 } 00:17:40.982 ] 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "subsystem": "nbd", 00:17:40.982 "config": [] 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "subsystem": "scheduler", 00:17:40.982 "config": [ 00:17:40.982 { 00:17:40.982 "method": "framework_set_scheduler", 00:17:40.982 "params": { 00:17:40.982 "name": "static" 00:17:40.982 } 00:17:40.982 } 00:17:40.982 ] 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "subsystem": "nvmf", 00:17:40.982 "config": [ 00:17:40.982 { 00:17:40.982 "method": "nvmf_set_config", 00:17:40.982 "params": { 00:17:40.982 "discovery_filter": "match_any", 00:17:40.982 "admin_cmd_passthru": { 00:17:40.982 "identify_ctrlr": false 00:17:40.982 }, 00:17:40.982 "dhchap_digests": [ 00:17:40.982 "sha256", 00:17:40.982 "sha384", 00:17:40.982 "sha512" 00:17:40.982 ], 00:17:40.982 "dhchap_dhgroups": [ 00:17:40.982 "null", 00:17:40.982 "ffdhe2048", 00:17:40.982 "ffdhe3072", 00:17:40.982 "ffdhe4096", 00:17:40.982 "ffdhe6144", 00:17:40.982 "ffdhe8192" 00:17:40.982 ] 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "nvmf_set_max_subsystems", 00:17:40.982 "params": { 00:17:40.982 "max_subsystems": 1024 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "nvmf_set_crdt", 00:17:40.982 "params": { 00:17:40.982 "crdt1": 0, 00:17:40.982 "crdt2": 0, 00:17:40.982 "crdt3": 0 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "nvmf_create_transport", 00:17:40.982 "params": { 00:17:40.982 "trtype": "TCP", 00:17:40.982 "max_queue_depth": 128, 00:17:40.982 "max_io_qpairs_per_ctrlr": 127, 00:17:40.982 "in_capsule_data_size": 4096, 00:17:40.982 "max_io_size": 131072, 00:17:40.982 "io_unit_size": 131072, 00:17:40.982 "max_aq_depth": 128, 00:17:40.982 "num_shared_buffers": 511, 00:17:40.982 "buf_cache_size": 4294967295, 00:17:40.982 "dif_insert_or_strip": false, 00:17:40.982 "zcopy": false, 00:17:40.982 "c2h_success": false, 00:17:40.982 "sock_priority": 0, 00:17:40.982 "abort_timeout_sec": 1, 00:17:40.982 "ack_timeout": 0, 00:17:40.982 "data_wr_pool_size": 0 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "nvmf_create_subsystem", 00:17:40.982 "params": { 00:17:40.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.982 "allow_any_host": false, 00:17:40.982 "serial_number": "SPDK00000000000001", 00:17:40.982 "model_number": "SPDK bdev Controller", 00:17:40.982 "max_namespaces": 10, 
00:17:40.982 "min_cntlid": 1, 00:17:40.982 "max_cntlid": 65519, 00:17:40.982 "ana_reporting": false 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "nvmf_subsystem_add_host", 00:17:40.982 "params": { 00:17:40.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.982 "host": "nqn.2016-06.io.spdk:host1", 00:17:40.982 "psk": "key0" 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "nvmf_subsystem_add_ns", 00:17:40.982 "params": { 00:17:40.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.982 "namespace": { 00:17:40.982 "nsid": 1, 00:17:40.982 "bdev_name": "malloc0", 00:17:40.982 "nguid": "FD0D4764A3F24E2190D44A7EF29EAB1D", 00:17:40.982 "uuid": "fd0d4764-a3f2-4e21-90d4-4a7ef29eab1d", 00:17:40.982 "no_auto_visible": false 00:17:40.982 } 00:17:40.982 } 00:17:40.982 }, 00:17:40.982 { 00:17:40.982 "method": "nvmf_subsystem_add_listener", 00:17:40.982 "params": { 00:17:40.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:17:40.982 "listen_address": { 00:17:40.982 "trtype": "TCP", 00:17:40.982 "adrfam": "IPv4", 00:17:40.982 "traddr": "10.0.0.3", 00:17:40.982 "trsvcid": "4420" 00:17:40.982 }, 00:17:40.982 "secure_channel": true 00:17:40.982 } 00:17:40.982 } 00:17:40.983 ] 00:17:40.983 } 00:17:40.983 ] 00:17:40.983 }' 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=74818 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 74818 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74818 ']' 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:40.983 01:37:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:40.983 [2024-11-17 01:37:49.350515] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:40.983 [2024-11-17 01:37:49.350677] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:41.242 [2024-11-17 01:37:49.518506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.242 [2024-11-17 01:37:49.612765] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:41.242 [2024-11-17 01:37:49.613018] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:41.242 [2024-11-17 01:37:49.613050] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:41.242 [2024-11-17 01:37:49.613075] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:41.242 [2024-11-17 01:37:49.613089] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:41.242 [2024-11-17 01:37:49.614224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:41.501 [2024-11-17 01:37:49.882654] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:41.760 [2024-11-17 01:37:50.028944] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:41.760 [2024-11-17 01:37:50.060879] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:41.760 [2024-11-17 01:37:50.061164] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:41.760 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:41.760 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:41.760 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:41.760 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:41.760 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.020 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=74849 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 74849 /var/tmp/bdevperf.sock 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 74849 ']' 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
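Note (not part of the captured output): the JSON blob fed to nvmf_tgt above via "-c /dev/fd/62" is just a saved-config form of the target-side TLS setup. A condensed sketch of the same setup as individual rpc.py calls — assuming a target already running on the default /var/tmp/spdk.sock and the same PSK interchange file used in this run — would look roughly like:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl        # register the TLS PSK interchange file
    $rpc nvmf_create_transport -t tcp                         # TCP transport, default parameters
    $rpc bdev_malloc_create 32 4096 -b malloc0                # 32 MiB bdev backing namespace 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k   # -k = secure_channel (TLS)
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The later test case in this log (target/tls.sh@52-59) drives essentially the same sequence with explicit rpc.py calls, which is why these commands appear verbatim further down.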
00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:17:42.020 01:37:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:17:42.020 "subsystems": [ 00:17:42.020 { 00:17:42.020 "subsystem": "keyring", 00:17:42.020 "config": [ 00:17:42.020 { 00:17:42.020 "method": "keyring_file_add_key", 00:17:42.020 "params": { 00:17:42.020 "name": "key0", 00:17:42.020 "path": "/tmp/tmp.7aOSk2rSxl" 00:17:42.020 } 00:17:42.020 } 00:17:42.020 ] 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "subsystem": "iobuf", 00:17:42.020 "config": [ 00:17:42.020 { 00:17:42.020 "method": "iobuf_set_options", 00:17:42.020 "params": { 00:17:42.020 "small_pool_count": 8192, 00:17:42.020 "large_pool_count": 1024, 00:17:42.020 "small_bufsize": 8192, 00:17:42.020 "large_bufsize": 135168, 00:17:42.020 "enable_numa": false 00:17:42.020 } 00:17:42.020 } 00:17:42.020 ] 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "subsystem": "sock", 00:17:42.020 "config": [ 00:17:42.020 { 00:17:42.020 "method": "sock_set_default_impl", 00:17:42.020 "params": { 00:17:42.020 "impl_name": "uring" 00:17:42.020 } 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "method": "sock_impl_set_options", 00:17:42.020 "params": { 00:17:42.020 "impl_name": "ssl", 00:17:42.020 "recv_buf_size": 4096, 00:17:42.020 "send_buf_size": 4096, 00:17:42.020 "enable_recv_pipe": true, 00:17:42.020 "enable_quickack": false, 00:17:42.020 "enable_placement_id": 0, 00:17:42.020 "enable_zerocopy_send_server": true, 00:17:42.020 "enable_zerocopy_send_client": false, 00:17:42.020 "zerocopy_threshold": 0, 00:17:42.020 "tls_version": 0, 00:17:42.020 "enable_ktls": false 00:17:42.020 } 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "method": "sock_impl_set_options", 00:17:42.020 "params": { 00:17:42.020 "impl_name": "posix", 00:17:42.020 "recv_buf_size": 2097152, 00:17:42.020 "send_buf_size": 2097152, 00:17:42.020 "enable_recv_pipe": true, 00:17:42.020 "enable_quickack": false, 00:17:42.020 "enable_placement_id": 0, 00:17:42.020 "enable_zerocopy_send_server": true, 00:17:42.020 "enable_zerocopy_send_client": false, 00:17:42.020 "zerocopy_threshold": 0, 00:17:42.020 "tls_version": 0, 00:17:42.020 "enable_ktls": false 00:17:42.020 } 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "method": "sock_impl_set_options", 00:17:42.020 "params": { 00:17:42.020 "impl_name": "uring", 00:17:42.020 "recv_buf_size": 2097152, 00:17:42.020 "send_buf_size": 2097152, 00:17:42.020 "enable_recv_pipe": true, 00:17:42.020 "enable_quickack": false, 00:17:42.020 "enable_placement_id": 0, 00:17:42.020 "enable_zerocopy_send_server": false, 00:17:42.020 "enable_zerocopy_send_client": false, 00:17:42.020 "zerocopy_threshold": 0, 00:17:42.020 "tls_version": 0, 00:17:42.020 "enable_ktls": false 00:17:42.020 } 00:17:42.020 } 00:17:42.020 ] 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "subsystem": "vmd", 00:17:42.020 "config": [] 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "subsystem": "accel", 00:17:42.020 "config": [ 00:17:42.020 { 00:17:42.020 "method": "accel_set_options", 00:17:42.020 "params": { 00:17:42.020 "small_cache_size": 128, 00:17:42.020 "large_cache_size": 16, 00:17:42.020 "task_count": 2048, 00:17:42.020 "sequence_count": 
2048, 00:17:42.020 "buf_count": 2048 00:17:42.020 } 00:17:42.020 } 00:17:42.020 ] 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "subsystem": "bdev", 00:17:42.020 "config": [ 00:17:42.020 { 00:17:42.020 "method": "bdev_set_options", 00:17:42.020 "params": { 00:17:42.020 "bdev_io_pool_size": 65535, 00:17:42.020 "bdev_io_cache_size": 256, 00:17:42.020 "bdev_auto_examine": true, 00:17:42.020 "iobuf_small_cache_size": 128, 00:17:42.020 "iobuf_large_cache_size": 16 00:17:42.020 } 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "method": "bdev_raid_set_options", 00:17:42.020 "params": { 00:17:42.020 "process_window_size_kb": 1024, 00:17:42.020 "process_max_bandwidth_mb_sec": 0 00:17:42.020 } 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "method": "bdev_iscsi_set_options", 00:17:42.020 "params": { 00:17:42.020 "timeout_sec": 30 00:17:42.020 } 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "method": "bdev_nvme_set_options", 00:17:42.020 "params": { 00:17:42.020 "action_on_timeout": "none", 00:17:42.020 "timeout_us": 0, 00:17:42.020 "timeout_admin_us": 0, 00:17:42.020 "keep_alive_timeout_ms": 10000, 00:17:42.020 "arbitration_burst": 0, 00:17:42.020 "low_priority_weight": 0, 00:17:42.020 "medium_priority_weight": 0, 00:17:42.020 "high_priority_weight": 0, 00:17:42.020 "nvme_adminq_poll_period_us": 10000, 00:17:42.020 "nvme_ioq_poll_period_us": 0, 00:17:42.020 "io_queue_requests": 512, 00:17:42.020 "delay_cmd_submit": true, 00:17:42.020 "transport_retry_count": 4, 00:17:42.020 "bdev_retry_count": 3, 00:17:42.020 "transport_ack_timeout": 0, 00:17:42.020 "ctrlr_loss_timeout_sec": 0, 00:17:42.020 "reconnect_delay_sec": 0, 00:17:42.020 "fast_io_fail_timeout_sec": 0, 00:17:42.020 "disable_auto_failback": false, 00:17:42.020 "generate_uuids": false, 00:17:42.020 "transport_tos": 0, 00:17:42.020 "nvme_error_stat": false, 00:17:42.020 "rdma_srq_size": 0, 00:17:42.020 "io_path_stat": false, 00:17:42.020 "allow_accel_sequence": false, 00:17:42.020 "rdma_max_cq_size": 0, 00:17:42.020 "rdma_cm_event_timeout_ms": 0, 00:17:42.020 "dhchap_digests": [ 00:17:42.020 "sha256", 00:17:42.020 "sha384", 00:17:42.020 "sha512" 00:17:42.020 ], 00:17:42.020 "dhchap_dhgroups": [ 00:17:42.020 "null", 00:17:42.020 "ffdhe2048", 00:17:42.020 "ffdhe3072", 00:17:42.020 "ffdhe4096", 00:17:42.020 "ffdhe6144", 00:17:42.020 "ffdhe8192" 00:17:42.020 ] 00:17:42.020 } 00:17:42.020 }, 00:17:42.020 { 00:17:42.020 "method": "bdev_nvme_attach_controller", 00:17:42.020 "params": { 00:17:42.020 "name": "TLSTEST", 00:17:42.020 "trtype": "TCP", 00:17:42.020 "adrfam": "IPv4", 00:17:42.020 "traddr": "10.0.0.3", 00:17:42.021 "trsvcid": "4420", 00:17:42.021 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:42.021 "prchk_reftag": false, 00:17:42.021 "prchk_guard": false, 00:17:42.021 "ctrlr_loss_timeout_sec": 0, 00:17:42.021 "reconnect_delay_sec": 0, 00:17:42.021 "fast_io_fail_timeout_sec": 0, 00:17:42.021 "psk": "key0", 00:17:42.021 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.021 "hdgst": false, 00:17:42.021 "ddgst": false, 00:17:42.021 "multipath": "multipath" 00:17:42.021 } 00:17:42.021 }, 00:17:42.021 { 00:17:42.021 "method": "bdev_nvme_set_hotplug", 00:17:42.021 "params": { 00:17:42.021 "period_us": 100000, 00:17:42.021 "enable": false 00:17:42.021 } 00:17:42.021 }, 00:17:42.021 { 00:17:42.021 "method": "bdev_wait_for_examine" 00:17:42.021 } 00:17:42.021 ] 00:17:42.021 }, 00:17:42.021 { 00:17:42.021 "subsystem": "nbd", 00:17:42.021 "config": [] 00:17:42.021 } 00:17:42.021 ] 00:17:42.021 }' 00:17:42.021 [2024-11-17 01:37:50.359451] Starting SPDK v25.01-pre git 
sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:42.021 [2024-11-17 01:37:50.359646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74849 ] 00:17:42.280 [2024-11-17 01:37:50.546442] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.280 [2024-11-17 01:37:50.669989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.539 [2024-11-17 01:37:50.925680] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:42.799 [2024-11-17 01:37:51.031724] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:43.058 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:43.058 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:43.058 01:37:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:43.058 Running I/O for 10 seconds... 00:17:45.373 3072.00 IOPS, 12.00 MiB/s [2024-11-17T01:37:54.400Z] 3106.00 IOPS, 12.13 MiB/s [2024-11-17T01:37:55.776Z] 3122.33 IOPS, 12.20 MiB/s [2024-11-17T01:37:56.712Z] 3170.75 IOPS, 12.39 MiB/s [2024-11-17T01:37:57.650Z] 3169.00 IOPS, 12.38 MiB/s [2024-11-17T01:37:58.587Z] 3180.17 IOPS, 12.42 MiB/s [2024-11-17T01:37:59.523Z] 3192.14 IOPS, 12.47 MiB/s [2024-11-17T01:38:00.460Z] 3206.25 IOPS, 12.52 MiB/s [2024-11-17T01:38:01.838Z] 3212.44 IOPS, 12.55 MiB/s [2024-11-17T01:38:01.838Z] 3226.50 IOPS, 12.60 MiB/s 00:17:53.379 Latency(us) 00:17:53.379 [2024-11-17T01:38:01.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.379 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:53.379 Verification LBA range: start 0x0 length 0x2000 00:17:53.379 TLSTESTn1 : 10.02 3231.20 12.62 0.00 0.00 39536.57 8162.21 42181.35 00:17:53.379 [2024-11-17T01:38:01.838Z] =================================================================================================================== 00:17:53.379 [2024-11-17T01:38:01.838Z] Total : 3231.20 12.62 0.00 0.00 39536.57 8162.21 42181.35 00:17:53.379 { 00:17:53.379 "results": [ 00:17:53.379 { 00:17:53.379 "job": "TLSTESTn1", 00:17:53.379 "core_mask": "0x4", 00:17:53.379 "workload": "verify", 00:17:53.379 "status": "finished", 00:17:53.379 "verify_range": { 00:17:53.379 "start": 0, 00:17:53.379 "length": 8192 00:17:53.379 }, 00:17:53.379 "queue_depth": 128, 00:17:53.379 "io_size": 4096, 00:17:53.379 "runtime": 10.024464, 00:17:53.379 "iops": 3231.1952040528054, 00:17:53.379 "mibps": 12.621856265831271, 00:17:53.379 "io_failed": 0, 00:17:53.379 "io_timeout": 0, 00:17:53.379 "avg_latency_us": 39536.565937676285, 00:17:53.379 "min_latency_us": 8162.210909090909, 00:17:53.379 "max_latency_us": 42181.35272727273 00:17:53.379 } 00:17:53.379 ], 00:17:53.379 "core_count": 1 00:17:53.379 } 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 74849 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74849 ']' 00:17:53.379 
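Note (not part of the captured output): a quick sanity check on the TLSTESTn1 result table above — for this 4096-byte verify workload, the MiB/s column is simply IOPS scaled by the I/O size, using the values reported in the results JSON:

    awk 'BEGIN {
        iops    = 3231.1952040528054   # "iops" from the results JSON above
        io_size = 4096                 # "io_size" (bytes) from the results JSON above
        printf "%.2f MiB/s\n", iops * io_size / 1048576   # prints 12.62, matching the table
    }'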
01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74849 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74849 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:53.379 killing process with pid 74849 00:17:53.379 Received shutdown signal, test time was about 10.000000 seconds 00:17:53.379 00:17:53.379 Latency(us) 00:17:53.379 [2024-11-17T01:38:01.838Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:53.379 [2024-11-17T01:38:01.838Z] =================================================================================================================== 00:17:53.379 [2024-11-17T01:38:01.838Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74849' 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74849 00:17:53.379 01:38:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74849 00:17:53.948 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 74818 00:17:53.948 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 74818 ']' 00:17:53.948 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 74818 00:17:53.948 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:53.948 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:53.948 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74818 00:17:54.207 killing process with pid 74818 00:17:54.207 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:54.207 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:54.207 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74818' 00:17:54.207 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 74818 00:17:54.207 01:38:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 74818 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75002 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75002 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75002 ']' 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.145 01:38:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.145 [2024-11-17 01:38:03.508394] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:55.145 [2024-11-17 01:38:03.508573] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:55.404 [2024-11-17 01:38:03.694930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.404 [2024-11-17 01:38:03.819141] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:55.404 [2024-11-17 01:38:03.819220] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:55.404 [2024-11-17 01:38:03.819254] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:55.404 [2024-11-17 01:38:03.819282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:55.404 [2024-11-17 01:38:03.819299] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:55.404 [2024-11-17 01:38:03.820670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:55.664 [2024-11-17 01:38:03.986030] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.7aOSk2rSxl 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.7aOSk2rSxl 00:17:56.230 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:56.230 [2024-11-17 01:38:04.680666] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:56.489 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:56.748 01:38:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -k 00:17:56.748 [2024-11-17 01:38:05.200875] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:56.748 [2024-11-17 01:38:05.201173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:17:57.006 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:57.264 malloc0 00:17:57.264 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:57.523 01:38:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:57.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
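Note (not part of the captured output): in the bdev_malloc_create call above, the positional arguments are the bdev size in MiB and the block size in bytes, which is why the saved configuration later in this log reports malloc0 as num_blocks 8192 with block_size 4096:

    echo $(( 32 * 1024 * 1024 / 4096 ))   # 8192 blocks of 4096 B == the 32 MiB requested above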
00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=75062 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 75062 /var/tmp/bdevperf.sock 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75062 ']' 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.782 01:38:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.041 [2024-11-17 01:38:06.337670] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:17:58.041 [2024-11-17 01:38:06.338094] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75062 ] 00:17:58.299 [2024-11-17 01:38:06.523205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.299 [2024-11-17 01:38:06.632007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.558 [2024-11-17 01:38:06.791919] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:17:59.126 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.126 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.126 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:17:59.126 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:59.384 [2024-11-17 01:38:07.739010] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:59.384 nvme0n1 00:17:59.385 01:38:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:59.643 Running I/O for 1 seconds... 
00:18:00.577 3114.00 IOPS, 12.16 MiB/s 00:18:00.577 Latency(us) 00:18:00.577 [2024-11-17T01:38:09.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.577 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:00.577 Verification LBA range: start 0x0 length 0x2000 00:18:00.577 nvme0n1 : 1.03 3149.35 12.30 0.00 0.00 39982.15 6166.34 32648.84 00:18:00.577 [2024-11-17T01:38:09.036Z] =================================================================================================================== 00:18:00.577 [2024-11-17T01:38:09.036Z] Total : 3149.35 12.30 0.00 0.00 39982.15 6166.34 32648.84 00:18:00.577 { 00:18:00.577 "results": [ 00:18:00.577 { 00:18:00.577 "job": "nvme0n1", 00:18:00.577 "core_mask": "0x2", 00:18:00.577 "workload": "verify", 00:18:00.577 "status": "finished", 00:18:00.577 "verify_range": { 00:18:00.577 "start": 0, 00:18:00.577 "length": 8192 00:18:00.577 }, 00:18:00.577 "queue_depth": 128, 00:18:00.577 "io_size": 4096, 00:18:00.577 "runtime": 1.029419, 00:18:00.577 "iops": 3149.349293144968, 00:18:00.577 "mibps": 12.302145676347532, 00:18:00.577 "io_failed": 0, 00:18:00.577 "io_timeout": 0, 00:18:00.577 "avg_latency_us": 39982.151497953004, 00:18:00.577 "min_latency_us": 6166.341818181818, 00:18:00.577 "max_latency_us": 32648.843636363636 00:18:00.577 } 00:18:00.577 ], 00:18:00.577 "core_count": 1 00:18:00.577 } 00:18:00.577 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 75062 00:18:00.577 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75062 ']' 00:18:00.577 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75062 00:18:00.577 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:00.577 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.577 01:38:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75062 00:18:00.577 killing process with pid 75062 00:18:00.577 Received shutdown signal, test time was about 1.000000 seconds 00:18:00.577 00:18:00.577 Latency(us) 00:18:00.577 [2024-11-17T01:38:09.036Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.577 [2024-11-17T01:38:09.036Z] =================================================================================================================== 00:18:00.577 [2024-11-17T01:38:09.036Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:00.577 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:00.577 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:00.577 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75062' 00:18:00.577 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75062 00:18:00.577 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75062 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 75002 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75002 ']' 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75002 00:18:01.515 01:38:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75002 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:01.515 killing process with pid 75002 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75002' 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75002 00:18:01.515 01:38:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75002 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75126 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75126 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75126 ']' 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:02.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:02.452 01:38:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:02.452 [2024-11-17 01:38:10.828980] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:02.452 [2024-11-17 01:38:10.829105] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:02.711 [2024-11-17 01:38:10.991954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:02.711 [2024-11-17 01:38:11.084267] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:02.711 [2024-11-17 01:38:11.084347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:02.711 [2024-11-17 01:38:11.084382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:02.711 [2024-11-17 01:38:11.084404] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:02.711 [2024-11-17 01:38:11.084417] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:02.711 [2024-11-17 01:38:11.085645] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.970 [2024-11-17 01:38:11.244231] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.536 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.536 [2024-11-17 01:38:11.818057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:03.536 malloc0 00:18:03.537 [2024-11-17 01:38:11.865284] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:03.537 [2024-11-17 01:38:11.865656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=75163 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 75163 /var/tmp/bdevperf.sock 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75163 ']' 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:03.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:03.537 01:38:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.795 [2024-11-17 01:38:12.005612] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:03.795 [2024-11-17 01:38:12.005819] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75163 ] 00:18:03.795 [2024-11-17 01:38:12.191707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.053 [2024-11-17 01:38:12.316438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:04.053 [2024-11-17 01:38:12.497132] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:04.619 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:04.619 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:04.619 01:38:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.7aOSk2rSxl 00:18:04.878 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:05.136 [2024-11-17 01:38:13.366640] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:05.136 nvme0n1 00:18:05.136 01:38:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:05.136 Running I/O for 1 seconds... 
00:18:06.513 3200.00 IOPS, 12.50 MiB/s 00:18:06.513 Latency(us) 00:18:06.513 [2024-11-17T01:38:14.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.513 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:06.513 Verification LBA range: start 0x0 length 0x2000 00:18:06.513 nvme0n1 : 1.03 3226.94 12.61 0.00 0.00 39132.63 8698.41 25141.99 00:18:06.513 [2024-11-17T01:38:14.972Z] =================================================================================================================== 00:18:06.513 [2024-11-17T01:38:14.972Z] Total : 3226.94 12.61 0.00 0.00 39132.63 8698.41 25141.99 00:18:06.513 { 00:18:06.513 "results": [ 00:18:06.513 { 00:18:06.513 "job": "nvme0n1", 00:18:06.513 "core_mask": "0x2", 00:18:06.513 "workload": "verify", 00:18:06.513 "status": "finished", 00:18:06.513 "verify_range": { 00:18:06.513 "start": 0, 00:18:06.513 "length": 8192 00:18:06.513 }, 00:18:06.513 "queue_depth": 128, 00:18:06.513 "io_size": 4096, 00:18:06.513 "runtime": 1.031318, 00:18:06.513 "iops": 3226.938732767197, 00:18:06.513 "mibps": 12.605229424871863, 00:18:06.513 "io_failed": 0, 00:18:06.513 "io_timeout": 0, 00:18:06.513 "avg_latency_us": 39132.634405594414, 00:18:06.513 "min_latency_us": 8698.414545454545, 00:18:06.513 "max_latency_us": 25141.992727272725 00:18:06.513 } 00:18:06.513 ], 00:18:06.513 "core_count": 1 00:18:06.513 } 00:18:06.513 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:18:06.513 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.513 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.513 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.513 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:18:06.513 "subsystems": [ 00:18:06.513 { 00:18:06.513 "subsystem": "keyring", 00:18:06.513 "config": [ 00:18:06.513 { 00:18:06.513 "method": "keyring_file_add_key", 00:18:06.513 "params": { 00:18:06.513 "name": "key0", 00:18:06.513 "path": "/tmp/tmp.7aOSk2rSxl" 00:18:06.513 } 00:18:06.513 } 00:18:06.513 ] 00:18:06.513 }, 00:18:06.513 { 00:18:06.513 "subsystem": "iobuf", 00:18:06.513 "config": [ 00:18:06.513 { 00:18:06.513 "method": "iobuf_set_options", 00:18:06.513 "params": { 00:18:06.513 "small_pool_count": 8192, 00:18:06.513 "large_pool_count": 1024, 00:18:06.513 "small_bufsize": 8192, 00:18:06.513 "large_bufsize": 135168, 00:18:06.513 "enable_numa": false 00:18:06.513 } 00:18:06.513 } 00:18:06.513 ] 00:18:06.513 }, 00:18:06.513 { 00:18:06.513 "subsystem": "sock", 00:18:06.513 "config": [ 00:18:06.513 { 00:18:06.513 "method": "sock_set_default_impl", 00:18:06.513 "params": { 00:18:06.513 "impl_name": "uring" 00:18:06.513 } 00:18:06.513 }, 00:18:06.513 { 00:18:06.513 "method": "sock_impl_set_options", 00:18:06.513 "params": { 00:18:06.513 "impl_name": "ssl", 00:18:06.513 "recv_buf_size": 4096, 00:18:06.513 "send_buf_size": 4096, 00:18:06.513 "enable_recv_pipe": true, 00:18:06.513 "enable_quickack": false, 00:18:06.513 "enable_placement_id": 0, 00:18:06.513 "enable_zerocopy_send_server": true, 00:18:06.513 "enable_zerocopy_send_client": false, 00:18:06.513 "zerocopy_threshold": 0, 00:18:06.513 "tls_version": 0, 00:18:06.513 "enable_ktls": false 00:18:06.513 } 00:18:06.513 }, 00:18:06.513 { 00:18:06.513 "method": "sock_impl_set_options", 00:18:06.513 "params": { 00:18:06.513 "impl_name": 
"posix", 00:18:06.513 "recv_buf_size": 2097152, 00:18:06.513 "send_buf_size": 2097152, 00:18:06.513 "enable_recv_pipe": true, 00:18:06.513 "enable_quickack": false, 00:18:06.513 "enable_placement_id": 0, 00:18:06.513 "enable_zerocopy_send_server": true, 00:18:06.513 "enable_zerocopy_send_client": false, 00:18:06.513 "zerocopy_threshold": 0, 00:18:06.513 "tls_version": 0, 00:18:06.513 "enable_ktls": false 00:18:06.513 } 00:18:06.513 }, 00:18:06.513 { 00:18:06.513 "method": "sock_impl_set_options", 00:18:06.513 "params": { 00:18:06.513 "impl_name": "uring", 00:18:06.513 "recv_buf_size": 2097152, 00:18:06.513 "send_buf_size": 2097152, 00:18:06.513 "enable_recv_pipe": true, 00:18:06.513 "enable_quickack": false, 00:18:06.513 "enable_placement_id": 0, 00:18:06.513 "enable_zerocopy_send_server": false, 00:18:06.513 "enable_zerocopy_send_client": false, 00:18:06.513 "zerocopy_threshold": 0, 00:18:06.513 "tls_version": 0, 00:18:06.513 "enable_ktls": false 00:18:06.513 } 00:18:06.513 } 00:18:06.513 ] 00:18:06.513 }, 00:18:06.513 { 00:18:06.513 "subsystem": "vmd", 00:18:06.513 "config": [] 00:18:06.513 }, 00:18:06.513 { 00:18:06.513 "subsystem": "accel", 00:18:06.513 "config": [ 00:18:06.513 { 00:18:06.513 "method": "accel_set_options", 00:18:06.513 "params": { 00:18:06.513 "small_cache_size": 128, 00:18:06.513 "large_cache_size": 16, 00:18:06.513 "task_count": 2048, 00:18:06.513 "sequence_count": 2048, 00:18:06.514 "buf_count": 2048 00:18:06.514 } 00:18:06.514 } 00:18:06.514 ] 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "subsystem": "bdev", 00:18:06.514 "config": [ 00:18:06.514 { 00:18:06.514 "method": "bdev_set_options", 00:18:06.514 "params": { 00:18:06.514 "bdev_io_pool_size": 65535, 00:18:06.514 "bdev_io_cache_size": 256, 00:18:06.514 "bdev_auto_examine": true, 00:18:06.514 "iobuf_small_cache_size": 128, 00:18:06.514 "iobuf_large_cache_size": 16 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "bdev_raid_set_options", 00:18:06.514 "params": { 00:18:06.514 "process_window_size_kb": 1024, 00:18:06.514 "process_max_bandwidth_mb_sec": 0 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "bdev_iscsi_set_options", 00:18:06.514 "params": { 00:18:06.514 "timeout_sec": 30 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "bdev_nvme_set_options", 00:18:06.514 "params": { 00:18:06.514 "action_on_timeout": "none", 00:18:06.514 "timeout_us": 0, 00:18:06.514 "timeout_admin_us": 0, 00:18:06.514 "keep_alive_timeout_ms": 10000, 00:18:06.514 "arbitration_burst": 0, 00:18:06.514 "low_priority_weight": 0, 00:18:06.514 "medium_priority_weight": 0, 00:18:06.514 "high_priority_weight": 0, 00:18:06.514 "nvme_adminq_poll_period_us": 10000, 00:18:06.514 "nvme_ioq_poll_period_us": 0, 00:18:06.514 "io_queue_requests": 0, 00:18:06.514 "delay_cmd_submit": true, 00:18:06.514 "transport_retry_count": 4, 00:18:06.514 "bdev_retry_count": 3, 00:18:06.514 "transport_ack_timeout": 0, 00:18:06.514 "ctrlr_loss_timeout_sec": 0, 00:18:06.514 "reconnect_delay_sec": 0, 00:18:06.514 "fast_io_fail_timeout_sec": 0, 00:18:06.514 "disable_auto_failback": false, 00:18:06.514 "generate_uuids": false, 00:18:06.514 "transport_tos": 0, 00:18:06.514 "nvme_error_stat": false, 00:18:06.514 "rdma_srq_size": 0, 00:18:06.514 "io_path_stat": false, 00:18:06.514 "allow_accel_sequence": false, 00:18:06.514 "rdma_max_cq_size": 0, 00:18:06.514 "rdma_cm_event_timeout_ms": 0, 00:18:06.514 "dhchap_digests": [ 00:18:06.514 "sha256", 00:18:06.514 "sha384", 00:18:06.514 "sha512" 00:18:06.514 ], 00:18:06.514 
"dhchap_dhgroups": [ 00:18:06.514 "null", 00:18:06.514 "ffdhe2048", 00:18:06.514 "ffdhe3072", 00:18:06.514 "ffdhe4096", 00:18:06.514 "ffdhe6144", 00:18:06.514 "ffdhe8192" 00:18:06.514 ] 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "bdev_nvme_set_hotplug", 00:18:06.514 "params": { 00:18:06.514 "period_us": 100000, 00:18:06.514 "enable": false 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "bdev_malloc_create", 00:18:06.514 "params": { 00:18:06.514 "name": "malloc0", 00:18:06.514 "num_blocks": 8192, 00:18:06.514 "block_size": 4096, 00:18:06.514 "physical_block_size": 4096, 00:18:06.514 "uuid": "f4775269-e49b-4135-b370-ca66fd68c46a", 00:18:06.514 "optimal_io_boundary": 0, 00:18:06.514 "md_size": 0, 00:18:06.514 "dif_type": 0, 00:18:06.514 "dif_is_head_of_md": false, 00:18:06.514 "dif_pi_format": 0 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "bdev_wait_for_examine" 00:18:06.514 } 00:18:06.514 ] 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "subsystem": "nbd", 00:18:06.514 "config": [] 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "subsystem": "scheduler", 00:18:06.514 "config": [ 00:18:06.514 { 00:18:06.514 "method": "framework_set_scheduler", 00:18:06.514 "params": { 00:18:06.514 "name": "static" 00:18:06.514 } 00:18:06.514 } 00:18:06.514 ] 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "subsystem": "nvmf", 00:18:06.514 "config": [ 00:18:06.514 { 00:18:06.514 "method": "nvmf_set_config", 00:18:06.514 "params": { 00:18:06.514 "discovery_filter": "match_any", 00:18:06.514 "admin_cmd_passthru": { 00:18:06.514 "identify_ctrlr": false 00:18:06.514 }, 00:18:06.514 "dhchap_digests": [ 00:18:06.514 "sha256", 00:18:06.514 "sha384", 00:18:06.514 "sha512" 00:18:06.514 ], 00:18:06.514 "dhchap_dhgroups": [ 00:18:06.514 "null", 00:18:06.514 "ffdhe2048", 00:18:06.514 "ffdhe3072", 00:18:06.514 "ffdhe4096", 00:18:06.514 "ffdhe6144", 00:18:06.514 "ffdhe8192" 00:18:06.514 ] 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "nvmf_set_max_subsystems", 00:18:06.514 "params": { 00:18:06.514 "max_subsystems": 1024 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "nvmf_set_crdt", 00:18:06.514 "params": { 00:18:06.514 "crdt1": 0, 00:18:06.514 "crdt2": 0, 00:18:06.514 "crdt3": 0 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "nvmf_create_transport", 00:18:06.514 "params": { 00:18:06.514 "trtype": "TCP", 00:18:06.514 "max_queue_depth": 128, 00:18:06.514 "max_io_qpairs_per_ctrlr": 127, 00:18:06.514 "in_capsule_data_size": 4096, 00:18:06.514 "max_io_size": 131072, 00:18:06.514 "io_unit_size": 131072, 00:18:06.514 "max_aq_depth": 128, 00:18:06.514 "num_shared_buffers": 511, 00:18:06.514 "buf_cache_size": 4294967295, 00:18:06.514 "dif_insert_or_strip": false, 00:18:06.514 "zcopy": false, 00:18:06.514 "c2h_success": false, 00:18:06.514 "sock_priority": 0, 00:18:06.514 "abort_timeout_sec": 1, 00:18:06.514 "ack_timeout": 0, 00:18:06.514 "data_wr_pool_size": 0 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "nvmf_create_subsystem", 00:18:06.514 "params": { 00:18:06.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.514 "allow_any_host": false, 00:18:06.514 "serial_number": "00000000000000000000", 00:18:06.514 "model_number": "SPDK bdev Controller", 00:18:06.514 "max_namespaces": 32, 00:18:06.514 "min_cntlid": 1, 00:18:06.514 "max_cntlid": 65519, 00:18:06.514 "ana_reporting": false 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "nvmf_subsystem_add_host", 
00:18:06.514 "params": { 00:18:06.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.514 "host": "nqn.2016-06.io.spdk:host1", 00:18:06.514 "psk": "key0" 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "nvmf_subsystem_add_ns", 00:18:06.514 "params": { 00:18:06.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.514 "namespace": { 00:18:06.514 "nsid": 1, 00:18:06.514 "bdev_name": "malloc0", 00:18:06.514 "nguid": "F4775269E49B4135B370CA66FD68C46A", 00:18:06.514 "uuid": "f4775269-e49b-4135-b370-ca66fd68c46a", 00:18:06.514 "no_auto_visible": false 00:18:06.514 } 00:18:06.514 } 00:18:06.514 }, 00:18:06.514 { 00:18:06.514 "method": "nvmf_subsystem_add_listener", 00:18:06.514 "params": { 00:18:06.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.514 "listen_address": { 00:18:06.514 "trtype": "TCP", 00:18:06.514 "adrfam": "IPv4", 00:18:06.514 "traddr": "10.0.0.3", 00:18:06.514 "trsvcid": "4420" 00:18:06.514 }, 00:18:06.514 "secure_channel": false, 00:18:06.514 "sock_impl": "ssl" 00:18:06.514 } 00:18:06.514 } 00:18:06.514 ] 00:18:06.514 } 00:18:06.514 ] 00:18:06.514 }' 00:18:06.514 01:38:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:06.774 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:18:06.774 "subsystems": [ 00:18:06.774 { 00:18:06.774 "subsystem": "keyring", 00:18:06.774 "config": [ 00:18:06.774 { 00:18:06.774 "method": "keyring_file_add_key", 00:18:06.774 "params": { 00:18:06.774 "name": "key0", 00:18:06.774 "path": "/tmp/tmp.7aOSk2rSxl" 00:18:06.774 } 00:18:06.774 } 00:18:06.774 ] 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "subsystem": "iobuf", 00:18:06.774 "config": [ 00:18:06.774 { 00:18:06.774 "method": "iobuf_set_options", 00:18:06.774 "params": { 00:18:06.774 "small_pool_count": 8192, 00:18:06.774 "large_pool_count": 1024, 00:18:06.774 "small_bufsize": 8192, 00:18:06.774 "large_bufsize": 135168, 00:18:06.774 "enable_numa": false 00:18:06.774 } 00:18:06.774 } 00:18:06.774 ] 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "subsystem": "sock", 00:18:06.774 "config": [ 00:18:06.774 { 00:18:06.774 "method": "sock_set_default_impl", 00:18:06.774 "params": { 00:18:06.774 "impl_name": "uring" 00:18:06.774 } 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "method": "sock_impl_set_options", 00:18:06.774 "params": { 00:18:06.774 "impl_name": "ssl", 00:18:06.774 "recv_buf_size": 4096, 00:18:06.774 "send_buf_size": 4096, 00:18:06.774 "enable_recv_pipe": true, 00:18:06.774 "enable_quickack": false, 00:18:06.774 "enable_placement_id": 0, 00:18:06.774 "enable_zerocopy_send_server": true, 00:18:06.774 "enable_zerocopy_send_client": false, 00:18:06.774 "zerocopy_threshold": 0, 00:18:06.774 "tls_version": 0, 00:18:06.774 "enable_ktls": false 00:18:06.774 } 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "method": "sock_impl_set_options", 00:18:06.774 "params": { 00:18:06.774 "impl_name": "posix", 00:18:06.774 "recv_buf_size": 2097152, 00:18:06.774 "send_buf_size": 2097152, 00:18:06.774 "enable_recv_pipe": true, 00:18:06.774 "enable_quickack": false, 00:18:06.774 "enable_placement_id": 0, 00:18:06.774 "enable_zerocopy_send_server": true, 00:18:06.774 "enable_zerocopy_send_client": false, 00:18:06.774 "zerocopy_threshold": 0, 00:18:06.774 "tls_version": 0, 00:18:06.774 "enable_ktls": false 00:18:06.774 } 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "method": "sock_impl_set_options", 00:18:06.774 "params": { 00:18:06.774 "impl_name": "uring", 00:18:06.774 
"recv_buf_size": 2097152, 00:18:06.774 "send_buf_size": 2097152, 00:18:06.774 "enable_recv_pipe": true, 00:18:06.774 "enable_quickack": false, 00:18:06.774 "enable_placement_id": 0, 00:18:06.774 "enable_zerocopy_send_server": false, 00:18:06.774 "enable_zerocopy_send_client": false, 00:18:06.774 "zerocopy_threshold": 0, 00:18:06.774 "tls_version": 0, 00:18:06.774 "enable_ktls": false 00:18:06.774 } 00:18:06.774 } 00:18:06.774 ] 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "subsystem": "vmd", 00:18:06.774 "config": [] 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "subsystem": "accel", 00:18:06.774 "config": [ 00:18:06.774 { 00:18:06.774 "method": "accel_set_options", 00:18:06.774 "params": { 00:18:06.774 "small_cache_size": 128, 00:18:06.774 "large_cache_size": 16, 00:18:06.774 "task_count": 2048, 00:18:06.774 "sequence_count": 2048, 00:18:06.774 "buf_count": 2048 00:18:06.774 } 00:18:06.774 } 00:18:06.774 ] 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "subsystem": "bdev", 00:18:06.774 "config": [ 00:18:06.774 { 00:18:06.774 "method": "bdev_set_options", 00:18:06.774 "params": { 00:18:06.774 "bdev_io_pool_size": 65535, 00:18:06.774 "bdev_io_cache_size": 256, 00:18:06.774 "bdev_auto_examine": true, 00:18:06.774 "iobuf_small_cache_size": 128, 00:18:06.774 "iobuf_large_cache_size": 16 00:18:06.774 } 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "method": "bdev_raid_set_options", 00:18:06.774 "params": { 00:18:06.774 "process_window_size_kb": 1024, 00:18:06.774 "process_max_bandwidth_mb_sec": 0 00:18:06.774 } 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "method": "bdev_iscsi_set_options", 00:18:06.774 "params": { 00:18:06.774 "timeout_sec": 30 00:18:06.774 } 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "method": "bdev_nvme_set_options", 00:18:06.774 "params": { 00:18:06.774 "action_on_timeout": "none", 00:18:06.774 "timeout_us": 0, 00:18:06.774 "timeout_admin_us": 0, 00:18:06.774 "keep_alive_timeout_ms": 10000, 00:18:06.774 "arbitration_burst": 0, 00:18:06.774 "low_priority_weight": 0, 00:18:06.774 "medium_priority_weight": 0, 00:18:06.774 "high_priority_weight": 0, 00:18:06.774 "nvme_adminq_poll_period_us": 10000, 00:18:06.774 "nvme_ioq_poll_period_us": 0, 00:18:06.774 "io_queue_requests": 512, 00:18:06.774 "delay_cmd_submit": true, 00:18:06.774 "transport_retry_count": 4, 00:18:06.774 "bdev_retry_count": 3, 00:18:06.774 "transport_ack_timeout": 0, 00:18:06.774 "ctrlr_loss_timeout_sec": 0, 00:18:06.774 "reconnect_delay_sec": 0, 00:18:06.774 "fast_io_fail_timeout_sec": 0, 00:18:06.774 "disable_auto_failback": false, 00:18:06.774 "generate_uuids": false, 00:18:06.774 "transport_tos": 0, 00:18:06.774 "nvme_error_stat": false, 00:18:06.774 "rdma_srq_size": 0, 00:18:06.774 "io_path_stat": false, 00:18:06.774 "allow_accel_sequence": false, 00:18:06.774 "rdma_max_cq_size": 0, 00:18:06.774 "rdma_cm_event_timeout_ms": 0, 00:18:06.774 "dhchap_digests": [ 00:18:06.774 "sha256", 00:18:06.774 "sha384", 00:18:06.774 "sha512" 00:18:06.774 ], 00:18:06.774 "dhchap_dhgroups": [ 00:18:06.774 "null", 00:18:06.774 "ffdhe2048", 00:18:06.774 "ffdhe3072", 00:18:06.774 "ffdhe4096", 00:18:06.774 "ffdhe6144", 00:18:06.774 "ffdhe8192" 00:18:06.774 ] 00:18:06.774 } 00:18:06.774 }, 00:18:06.774 { 00:18:06.774 "method": "bdev_nvme_attach_controller", 00:18:06.774 "params": { 00:18:06.774 "name": "nvme0", 00:18:06.774 "trtype": "TCP", 00:18:06.774 "adrfam": "IPv4", 00:18:06.774 "traddr": "10.0.0.3", 00:18:06.774 "trsvcid": "4420", 00:18:06.774 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.774 "prchk_reftag": false, 00:18:06.774 
"prchk_guard": false, 00:18:06.774 "ctrlr_loss_timeout_sec": 0, 00:18:06.774 "reconnect_delay_sec": 0, 00:18:06.774 "fast_io_fail_timeout_sec": 0, 00:18:06.774 "psk": "key0", 00:18:06.774 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.774 "hdgst": false, 00:18:06.774 "ddgst": false, 00:18:06.774 "multipath": "multipath" 00:18:06.775 } 00:18:06.775 }, 00:18:06.775 { 00:18:06.775 "method": "bdev_nvme_set_hotplug", 00:18:06.775 "params": { 00:18:06.775 "period_us": 100000, 00:18:06.775 "enable": false 00:18:06.775 } 00:18:06.775 }, 00:18:06.775 { 00:18:06.775 "method": "bdev_enable_histogram", 00:18:06.775 "params": { 00:18:06.775 "name": "nvme0n1", 00:18:06.775 "enable": true 00:18:06.775 } 00:18:06.775 }, 00:18:06.775 { 00:18:06.775 "method": "bdev_wait_for_examine" 00:18:06.775 } 00:18:06.775 ] 00:18:06.775 }, 00:18:06.775 { 00:18:06.775 "subsystem": "nbd", 00:18:06.775 "config": [] 00:18:06.775 } 00:18:06.775 ] 00:18:06.775 }' 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 75163 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75163 ']' 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75163 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75163 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:06.775 killing process with pid 75163 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75163' 00:18:06.775 Received shutdown signal, test time was about 1.000000 seconds 00:18:06.775 00:18:06.775 Latency(us) 00:18:06.775 [2024-11-17T01:38:15.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:06.775 [2024-11-17T01:38:15.234Z] =================================================================================================================== 00:18:06.775 [2024-11-17T01:38:15.234Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75163 00:18:06.775 01:38:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75163 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 75126 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75126 ']' 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75126 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75126 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.712 01:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.712 killing process with pid 75126 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75126' 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75126 00:18:07.712 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75126 00:18:08.647 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:18:08.647 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:08.647 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:18:08.647 "subsystems": [ 00:18:08.647 { 00:18:08.647 "subsystem": "keyring", 00:18:08.647 "config": [ 00:18:08.647 { 00:18:08.647 "method": "keyring_file_add_key", 00:18:08.647 "params": { 00:18:08.647 "name": "key0", 00:18:08.647 "path": "/tmp/tmp.7aOSk2rSxl" 00:18:08.647 } 00:18:08.647 } 00:18:08.647 ] 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "subsystem": "iobuf", 00:18:08.647 "config": [ 00:18:08.647 { 00:18:08.647 "method": "iobuf_set_options", 00:18:08.647 "params": { 00:18:08.647 "small_pool_count": 8192, 00:18:08.647 "large_pool_count": 1024, 00:18:08.647 "small_bufsize": 8192, 00:18:08.647 "large_bufsize": 135168, 00:18:08.647 "enable_numa": false 00:18:08.647 } 00:18:08.647 } 00:18:08.647 ] 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "subsystem": "sock", 00:18:08.647 "config": [ 00:18:08.647 { 00:18:08.647 "method": "sock_set_default_impl", 00:18:08.647 "params": { 00:18:08.647 "impl_name": "uring" 00:18:08.647 } 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "method": "sock_impl_set_options", 00:18:08.647 "params": { 00:18:08.647 "impl_name": "ssl", 00:18:08.647 "recv_buf_size": 4096, 00:18:08.647 "send_buf_size": 4096, 00:18:08.647 "enable_recv_pipe": true, 00:18:08.647 "enable_quickack": false, 00:18:08.647 "enable_placement_id": 0, 00:18:08.647 "enable_zerocopy_send_server": true, 00:18:08.647 "enable_zerocopy_send_client": false, 00:18:08.647 "zerocopy_threshold": 0, 00:18:08.647 "tls_version": 0, 00:18:08.647 "enable_ktls": false 00:18:08.647 } 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "method": "sock_impl_set_options", 00:18:08.647 "params": { 00:18:08.647 "impl_name": "posix", 00:18:08.647 "recv_buf_size": 2097152, 00:18:08.647 "send_buf_size": 2097152, 00:18:08.647 "enable_recv_pipe": true, 00:18:08.647 "enable_quickack": false, 00:18:08.647 "enable_placement_id": 0, 00:18:08.647 "enable_zerocopy_send_server": true, 00:18:08.647 "enable_zerocopy_send_client": false, 00:18:08.647 "zerocopy_threshold": 0, 00:18:08.647 "tls_version": 0, 00:18:08.647 "enable_ktls": false 00:18:08.647 } 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "method": "sock_impl_set_options", 00:18:08.647 "params": { 00:18:08.647 "impl_name": "uring", 00:18:08.647 "recv_buf_size": 2097152, 00:18:08.647 "send_buf_size": 2097152, 00:18:08.647 "enable_recv_pipe": true, 00:18:08.647 "enable_quickack": false, 00:18:08.647 "enable_placement_id": 0, 00:18:08.647 "enable_zerocopy_send_server": false, 00:18:08.647 "enable_zerocopy_send_client": false, 00:18:08.647 "zerocopy_threshold": 0, 00:18:08.647 "tls_version": 0, 00:18:08.647 "enable_ktls": false 00:18:08.647 } 00:18:08.647 } 00:18:08.647 ] 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "subsystem": "vmd", 00:18:08.647 "config": [] 00:18:08.647 }, 00:18:08.647 { 
00:18:08.647 "subsystem": "accel", 00:18:08.647 "config": [ 00:18:08.647 { 00:18:08.647 "method": "accel_set_options", 00:18:08.647 "params": { 00:18:08.647 "small_cache_size": 128, 00:18:08.647 "large_cache_size": 16, 00:18:08.647 "task_count": 2048, 00:18:08.647 "sequence_count": 2048, 00:18:08.647 "buf_count": 2048 00:18:08.647 } 00:18:08.647 } 00:18:08.647 ] 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "subsystem": "bdev", 00:18:08.647 "config": [ 00:18:08.647 { 00:18:08.647 "method": "bdev_set_options", 00:18:08.647 "params": { 00:18:08.647 "bdev_io_pool_size": 65535, 00:18:08.647 "bdev_io_cache_size": 256, 00:18:08.647 "bdev_auto_examine": true, 00:18:08.647 "iobuf_small_cache_size": 128, 00:18:08.647 "iobuf_large_cache_size": 16 00:18:08.647 } 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "method": "bdev_raid_set_options", 00:18:08.647 "params": { 00:18:08.647 "process_window_size_kb": 1024, 00:18:08.647 "process_max_bandwidth_mb_sec": 0 00:18:08.647 } 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "method": "bdev_iscsi_set_options", 00:18:08.647 "params": { 00:18:08.647 "timeout_sec": 30 00:18:08.647 } 00:18:08.647 }, 00:18:08.647 { 00:18:08.647 "method": "bdev_nvme_set_options", 00:18:08.647 "params": { 00:18:08.648 "action_on_timeout": "none", 00:18:08.648 "timeout_us": 0, 00:18:08.648 "timeout_admin_us": 0, 00:18:08.648 "keep_alive_timeout_ms": 10000, 00:18:08.648 "arbitration_burst": 0, 00:18:08.648 "low_priority_weight": 0, 00:18:08.648 "medium_priority_weight": 0, 00:18:08.648 "high_priority_weight": 0, 00:18:08.648 "nvme_adminq_poll_period_us": 10000, 00:18:08.648 "nvme_ioq_poll_period_us": 0, 00:18:08.648 "io_queue_requests": 0, 00:18:08.648 "delay_cmd_submit": true, 00:18:08.648 "transport_retry_count": 4, 00:18:08.648 "bdev_retry_count": 3, 00:18:08.648 "transport_ack_timeout": 0, 00:18:08.648 "ctrlr_loss_timeout_sec": 0, 00:18:08.648 "reconnect_delay_sec": 0, 00:18:08.648 "fast_io_fail_timeout_sec": 0, 00:18:08.648 "disable_auto_failback": false, 00:18:08.648 "generate_uuids": false, 00:18:08.648 "transport_tos": 0, 00:18:08.648 "nvme_error_stat": false, 00:18:08.648 "rdma_srq_size": 0, 00:18:08.648 "io_path_stat": false, 00:18:08.648 "allow_accel_sequence": false, 00:18:08.648 "rdma_max_cq_size": 0, 00:18:08.648 "rdma_cm_event_timeout_ms": 0, 00:18:08.648 "dhchap_digests": [ 00:18:08.648 "sha256", 00:18:08.648 "sha384", 00:18:08.648 "sha512" 00:18:08.648 ], 00:18:08.648 "dhchap_dhgroups": [ 00:18:08.648 "null", 00:18:08.648 "ffdhe2048", 00:18:08.648 "ffdhe3072", 00:18:08.648 "ffdhe4096", 00:18:08.648 "ffdhe6144", 00:18:08.648 "ffdhe8192" 00:18:08.648 ] 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "bdev_nvme_set_hotplug", 00:18:08.648 "params": { 00:18:08.648 "period_us": 100000, 00:18:08.648 "enable": false 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "bdev_malloc_create", 00:18:08.648 "params": { 00:18:08.648 "name": "malloc0", 00:18:08.648 "num_blocks": 8192, 00:18:08.648 "block_size": 4096, 00:18:08.648 "physical_block_size": 4096, 00:18:08.648 "uuid": "f4775269-e49b-4135-b370-ca66fd68c46a", 00:18:08.648 "optimal_io_boundary": 0, 00:18:08.648 "md_size": 0, 00:18:08.648 "dif_type": 0, 00:18:08.648 "dif_is_head_of_md": false, 00:18:08.648 "dif_pi_format": 0 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "bdev_wait_for_examine" 00:18:08.648 } 00:18:08.648 ] 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "subsystem": "nbd", 00:18:08.648 "config": [] 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "subsystem": 
"scheduler", 00:18:08.648 "config": [ 00:18:08.648 { 00:18:08.648 "method": "framework_set_scheduler", 00:18:08.648 "params": { 00:18:08.648 "name": "static" 00:18:08.648 } 00:18:08.648 } 00:18:08.648 ] 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "subsystem": "nvmf", 00:18:08.648 "config": [ 00:18:08.648 { 00:18:08.648 "method": "nvmf_set_config", 00:18:08.648 "params": { 00:18:08.648 "discovery_filter": "match_any", 00:18:08.648 "admin_cmd_passthru": { 00:18:08.648 "identify_ctrlr": false 00:18:08.648 }, 00:18:08.648 "dhchap_digests": [ 00:18:08.648 "sha256", 00:18:08.648 "sha384", 00:18:08.648 "sha512" 00:18:08.648 ], 00:18:08.648 "dhchap_dhgroups": [ 00:18:08.648 "null", 00:18:08.648 "ffdhe2048", 00:18:08.648 "ffdhe3072", 00:18:08.648 "ffdhe4096", 00:18:08.648 "ffdhe6144", 00:18:08.648 "ffdhe8192" 00:18:08.648 ] 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "nvmf_set_max_subsystems", 00:18:08.648 "params": { 00:18:08.648 "max_subsystems": 1024 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "nvmf_set_crdt", 00:18:08.648 "params": { 00:18:08.648 "crdt1": 0, 00:18:08.648 "crdt2": 0, 00:18:08.648 "crdt3": 0 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "nvmf_create_transport", 00:18:08.648 "params": { 00:18:08.648 "trtype": "TCP", 00:18:08.648 "max_queue_depth": 128, 00:18:08.648 "max_io_qpairs_per_ctrlr": 127, 00:18:08.648 "in_capsule_data_size": 4096, 00:18:08.648 "max_io_size": 131072, 00:18:08.648 "io_unit_size": 131072, 00:18:08.648 "max_aq_depth": 128, 00:18:08.648 "num_shared_buffers": 511, 00:18:08.648 "buf_cache_size": 4294967295, 00:18:08.648 "dif_insert_or_strip": false, 00:18:08.648 "zcopy": false, 00:18:08.648 "c2h_success": false, 00:18:08.648 "sock_priority": 0, 00:18:08.648 "abort_timeout_sec": 1, 00:18:08.648 "ack_timeout": 0, 00:18:08.648 "data_wr_pool_size": 0 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "nvmf_create_subsystem", 00:18:08.648 "params": { 00:18:08.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.648 "allow_any_host": false, 00:18:08.648 "serial_number": "00000000000000000000", 00:18:08.648 "model_number": "SPDK bdev Controller", 00:18:08.648 "max_namespaces": 32, 00:18:08.648 "min_cntlid": 1, 00:18:08.648 "max_cntlid": 65519, 00:18:08.648 "ana_reporting": false 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "nvmf_subsystem_add_host", 00:18:08.648 "params": { 00:18:08.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.648 "host": "nqn.2016-06.io.spdk:host1", 00:18:08.648 "psk": "key0" 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "nvmf_subsystem_add_ns", 00:18:08.648 "params": { 00:18:08.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.648 "namespace": { 00:18:08.648 "nsid": 1, 00:18:08.648 "bdev_name": "malloc0", 00:18:08.648 "nguid": "F4775269E49B4135B370CA66FD68C46A", 00:18:08.648 "uuid": "f4775269-e49b-4135-b370-ca66fd68c46a", 00:18:08.648 "no_auto_visible": false 00:18:08.648 } 00:18:08.648 } 00:18:08.648 }, 00:18:08.648 { 00:18:08.648 "method": "nvmf_subsystem_add_listener", 00:18:08.648 "params": { 00:18:08.648 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:08.648 "listen_address": { 00:18:08.648 "trtype": "TCP", 00:18:08.648 "adrfam": "IPv4", 00:18:08.648 "traddr": "10.0.0.3", 00:18:08.648 "trsvcid": "4420" 00:18:08.648 }, 00:18:08.648 "secure_channel": false, 00:18:08.648 "sock_impl": "ssl" 00:18:08.648 } 00:18:08.648 } 00:18:08.648 ] 00:18:08.648 } 00:18:08.648 ] 00:18:08.648 }' 00:18:08.648 01:38:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=75242 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 75242 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75242 ']' 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.648 01:38:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:08.648 [2024-11-17 01:38:17.099317] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:08.648 [2024-11-17 01:38:17.099482] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.963 [2024-11-17 01:38:17.283197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.963 [2024-11-17 01:38:17.372230] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:08.963 [2024-11-17 01:38:17.372303] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:08.963 [2024-11-17 01:38:17.372321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:08.963 [2024-11-17 01:38:17.372344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:08.963 [2024-11-17 01:38:17.372357] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
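For readability: the configuration streamed into nvmf_tgt above via '-c /dev/fd/62' is the single-line JSON dump echoed at target/tls.sh@273. Its TLS-specific pieces reduce to a file-backed PSK registered in the keyring, a host entry on the subsystem that references that key, and a TCP listener bound to the ssl socket implementation. A re-indented excerpt with the key path, NQNs and address copied from the dump (the transport, subsystem, malloc bdev and namespace methods the full config also carries are omitted, so this fragment is illustrative rather than a complete standalone config):

    {
      "subsystems": [
        { "subsystem": "keyring",
          "config": [
            { "method": "keyring_file_add_key",
              "params": { "name": "key0", "path": "/tmp/tmp.7aOSk2rSxl" } }
          ] },
        { "subsystem": "nvmf",
          "config": [
            { "method": "nvmf_subsystem_add_host",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "host": "nqn.2016-06.io.spdk:host1",
                          "psk": "key0" } },
            { "method": "nvmf_subsystem_add_listener",
              "params": { "nqn": "nqn.2016-06.io.spdk:cnode1",
                          "listen_address": { "trtype": "TCP", "adrfam": "IPv4",
                                              "traddr": "10.0.0.3", "trsvcid": "4420" },
                          "secure_channel": false,
                          "sock_impl": "ssl" } }
          ] }
      ]
    }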
00:18:08.963 [2024-11-17 01:38:17.373556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:09.237 [2024-11-17 01:38:17.646087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:09.496 [2024-11-17 01:38:17.804497] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:09.497 [2024-11-17 01:38:17.836409] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:09.497 [2024-11-17 01:38:17.836725] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=75274 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 75274 /var/tmp/bdevperf.sock 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 75274 ']' 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.757 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:18:09.757 "subsystems": [ 00:18:09.757 { 00:18:09.757 "subsystem": "keyring", 00:18:09.757 "config": [ 00:18:09.757 { 00:18:09.757 "method": "keyring_file_add_key", 00:18:09.757 "params": { 00:18:09.757 "name": "key0", 00:18:09.757 "path": "/tmp/tmp.7aOSk2rSxl" 00:18:09.757 } 00:18:09.757 } 00:18:09.757 ] 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "subsystem": "iobuf", 00:18:09.757 "config": [ 00:18:09.757 { 00:18:09.757 "method": "iobuf_set_options", 00:18:09.757 "params": { 00:18:09.757 "small_pool_count": 8192, 00:18:09.757 "large_pool_count": 1024, 00:18:09.757 "small_bufsize": 8192, 00:18:09.757 "large_bufsize": 135168, 00:18:09.757 "enable_numa": false 00:18:09.757 } 00:18:09.757 } 00:18:09.757 ] 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "subsystem": "sock", 00:18:09.757 "config": [ 00:18:09.757 { 00:18:09.757 "method": "sock_set_default_impl", 00:18:09.757 "params": { 00:18:09.757 "impl_name": "uring" 00:18:09.757 } 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "method": "sock_impl_set_options", 00:18:09.757 "params": { 00:18:09.757 "impl_name": "ssl", 00:18:09.757 "recv_buf_size": 4096, 00:18:09.757 "send_buf_size": 4096, 00:18:09.757 "enable_recv_pipe": true, 00:18:09.757 "enable_quickack": false, 00:18:09.757 "enable_placement_id": 0, 00:18:09.757 "enable_zerocopy_send_server": true, 00:18:09.757 
"enable_zerocopy_send_client": false, 00:18:09.757 "zerocopy_threshold": 0, 00:18:09.757 "tls_version": 0, 00:18:09.757 "enable_ktls": false 00:18:09.757 } 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "method": "sock_impl_set_options", 00:18:09.757 "params": { 00:18:09.757 "impl_name": "posix", 00:18:09.757 "recv_buf_size": 2097152, 00:18:09.757 "send_buf_size": 2097152, 00:18:09.757 "enable_recv_pipe": true, 00:18:09.757 "enable_quickack": false, 00:18:09.757 "enable_placement_id": 0, 00:18:09.757 "enable_zerocopy_send_server": true, 00:18:09.757 "enable_zerocopy_send_client": false, 00:18:09.757 "zerocopy_threshold": 0, 00:18:09.757 "tls_version": 0, 00:18:09.757 "enable_ktls": false 00:18:09.757 } 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "method": "sock_impl_set_options", 00:18:09.757 "params": { 00:18:09.757 "impl_name": "uring", 00:18:09.757 "recv_buf_size": 2097152, 00:18:09.757 "send_buf_size": 2097152, 00:18:09.757 "enable_recv_pipe": true, 00:18:09.757 "enable_quickack": false, 00:18:09.757 "enable_placement_id": 0, 00:18:09.757 "enable_zerocopy_send_server": false, 00:18:09.757 "enable_zerocopy_send_client": false, 00:18:09.757 "zerocopy_threshold": 0, 00:18:09.757 "tls_version": 0, 00:18:09.757 "enable_ktls": false 00:18:09.757 } 00:18:09.757 } 00:18:09.757 ] 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "subsystem": "vmd", 00:18:09.757 "config": [] 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "subsystem": "accel", 00:18:09.757 "config": [ 00:18:09.757 { 00:18:09.757 "method": "accel_set_options", 00:18:09.757 "params": { 00:18:09.757 "small_cache_size": 128, 00:18:09.757 "large_cache_size": 16, 00:18:09.757 "task_count": 2048, 00:18:09.757 "sequence_count": 2048, 00:18:09.757 "buf_count": 2048 00:18:09.757 } 00:18:09.757 } 00:18:09.757 ] 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "subsystem": "bdev", 00:18:09.757 "config": [ 00:18:09.757 { 00:18:09.757 "method": "bdev_set_options", 00:18:09.757 "params": { 00:18:09.757 "bdev_io_pool_size": 65535, 00:18:09.757 "bdev_io_cache_size": 256, 00:18:09.757 "bdev_auto_examine": true, 00:18:09.757 "iobuf_small_cache_size": 128, 00:18:09.757 "iobuf_large_cache_size": 16 00:18:09.757 } 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "method": "bdev_raid_set_options", 00:18:09.757 "params": { 00:18:09.757 "process_window_size_kb": 1024, 00:18:09.757 "process_max_bandwidth_mb_sec": 0 00:18:09.757 } 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "method": "bdev_iscsi_set_options", 00:18:09.757 "params": { 00:18:09.757 "timeout_sec": 30 00:18:09.757 } 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "method": "bdev_nvme_set_options", 00:18:09.757 "params": { 00:18:09.757 "action_on_timeout": "none", 00:18:09.757 "timeout_us": 0, 00:18:09.757 "timeout_admin_us": 0, 00:18:09.757 "keep_alive_timeout_ms": 10000, 00:18:09.757 "arbitration_burst": 0, 00:18:09.757 "low_priority_weight": 0, 00:18:09.757 "medium_priority_weight": 0, 00:18:09.757 "high_priority_weight": 0, 00:18:09.757 "nvme_adminq_poll_period_us": 10000, 00:18:09.757 "nvme_ioq_poll_period_us": 0, 00:18:09.757 "io_queue_requests": 512, 00:18:09.757 "delay_cmd_submit": true, 00:18:09.757 "transport_retry_count": 4, 00:18:09.757 "bdev_retry_count": 3, 00:18:09.757 "transport_ack_timeout": 0, 00:18:09.757 "ctrlr_loss_timeout_sec": 0, 00:18:09.757 "reconnect_delay_sec": 0, 00:18:09.757 "fast_io_fail_timeout_sec": 0, 00:18:09.757 "disable_auto_failback": false, 00:18:09.757 "generate_uuids": false, 00:18:09.757 "transport_tos": 0, 00:18:09.757 "nvme_error_stat": false, 00:18:09.757 "rdma_srq_size": 0, 
00:18:09.757 "io_path_stat": false, 00:18:09.757 "allow_accel_sequence": false, 00:18:09.757 "rdma_max_cq_size": 0, 00:18:09.757 "rdma_cm_event_timeout_ms": 0, 00:18:09.757 "dhchap_digests": [ 00:18:09.757 "sha256", 00:18:09.757 "sha384", 00:18:09.757 "sha512" 00:18:09.757 ], 00:18:09.757 "dhchap_dhgroups": [ 00:18:09.757 "null", 00:18:09.757 "ffdhe2048", 00:18:09.757 "ffdhe3072", 00:18:09.757 "ffdhe4096", 00:18:09.757 "ffdhe6144", 00:18:09.757 "ffdhe8192" 00:18:09.757 ] 00:18:09.757 } 00:18:09.757 }, 00:18:09.757 { 00:18:09.757 "method": "bdev_nvme_attach_controller", 00:18:09.757 "params": { 00:18:09.757 "name": "nvme0", 00:18:09.757 "trtype": "TCP", 00:18:09.757 "adrfam": "IPv4", 00:18:09.757 "traddr": "10.0.0.3", 00:18:09.757 "trsvcid": "4420", 00:18:09.757 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.757 "prchk_reftag": false, 00:18:09.757 "prchk_guard": false, 00:18:09.757 "ctrlr_loss_timeout_sec": 0, 00:18:09.757 "reconnect_delay_sec": 0, 00:18:09.757 "fast_io_fail_timeout_sec": 0, 00:18:09.757 "psk": "key0", 00:18:09.757 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:09.757 "hdgst": false, 00:18:09.758 "ddgst": false, 00:18:09.758 "multipath": "multipath" 00:18:09.758 } 00:18:09.758 }, 00:18:09.758 { 00:18:09.758 "method": "bdev_nvme_set_hotplug", 00:18:09.758 "params": { 00:18:09.758 "period_us": 100000, 00:18:09.758 "enable": false 00:18:09.758 } 00:18:09.758 }, 00:18:09.758 { 00:18:09.758 "method": "bdev_enable_histogram", 00:18:09.758 "params": { 00:18:09.758 "name": "nvme0n1", 00:18:09.758 "enable": true 00:18:09.758 } 00:18:09.758 }, 00:18:09.758 { 00:18:09.758 "method": "bdev_wait_for_examine" 00:18:09.758 } 00:18:09.758 ] 00:18:09.758 }, 00:18:09.758 { 00:18:09.758 "subsystem": "nbd", 00:18:09.758 "config": [] 00:18:09.758 } 00:18:09.758 ] 00:18:09.758 }' 00:18:09.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:09.758 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:09.758 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.758 01:38:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:10.017 [2024-11-17 01:38:18.287558] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:18:10.017 [2024-11-17 01:38:18.287764] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75274 ] 00:18:10.017 [2024-11-17 01:38:18.474742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.277 [2024-11-17 01:38:18.600381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.536 [2024-11-17 01:38:18.862730] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:10.536 [2024-11-17 01:38:18.977736] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:10.795 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.795 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:10.795 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:10.795 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:18:11.361 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.361 01:38:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:11.361 Running I/O for 1 seconds... 00:18:12.299 2816.00 IOPS, 11.00 MiB/s 00:18:12.299 Latency(us) 00:18:12.299 [2024-11-17T01:38:20.758Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.299 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:12.299 Verification LBA range: start 0x0 length 0x2000 00:18:12.299 nvme0n1 : 1.04 2827.23 11.04 0.00 0.00 44632.44 8340.95 26810.18 00:18:12.299 [2024-11-17T01:38:20.758Z] =================================================================================================================== 00:18:12.299 [2024-11-17T01:38:20.758Z] Total : 2827.23 11.04 0.00 0.00 44632.44 8340.95 26810.18 00:18:12.299 { 00:18:12.299 "results": [ 00:18:12.299 { 00:18:12.299 "job": "nvme0n1", 00:18:12.299 "core_mask": "0x2", 00:18:12.299 "workload": "verify", 00:18:12.299 "status": "finished", 00:18:12.299 "verify_range": { 00:18:12.299 "start": 0, 00:18:12.299 "length": 8192 00:18:12.299 }, 00:18:12.299 "queue_depth": 128, 00:18:12.299 "io_size": 4096, 00:18:12.299 "runtime": 1.041303, 00:18:12.299 "iops": 2827.22704150473, 00:18:12.299 "mibps": 11.043855630877852, 00:18:12.299 "io_failed": 0, 00:18:12.299 "io_timeout": 0, 00:18:12.299 "avg_latency_us": 44632.43636363636, 00:18:12.299 "min_latency_us": 8340.945454545454, 00:18:12.299 "max_latency_us": 26810.18181818182 00:18:12.299 } 00:18:12.299 ], 00:18:12.299 "core_count": 1 00:18:12.299 } 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 
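A quick sanity check on the throughput figures reported a few lines up: bdevperf was launched with a fixed queue depth of 128 (-q 128), so by Little's law the average completion latency should be roughly the queue depth divided by the sustained IOPS:

    128 outstanding I/Os / 2827.23 IOPS  ~= 45.3 ms
    reported average latency             ~= 44.6 ms  (44632.44 us)

The two agree to within a few percent (the residual comes from how the ~1.04 s runtime is accounted), so the ~45 ms average mostly reflects time spent queued at depth 128 rather than a 45 ms per-command cost.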
00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:12.299 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:12.299 nvmf_trace.0 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 75274 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75274 ']' 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75274 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75274 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.558 killing process with pid 75274 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75274' 00:18:12.558 Received shutdown signal, test time was about 1.000000 seconds 00:18:12.558 00:18:12.558 Latency(us) 00:18:12.558 [2024-11-17T01:38:21.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.558 [2024-11-17T01:38:21.017Z] =================================================================================================================== 00:18:12.558 [2024-11-17T01:38:21.017Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75274 00:18:12.558 01:38:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75274 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:13.496 rmmod nvme_tcp 00:18:13.496 rmmod nvme_fabrics 00:18:13.496 rmmod nvme_keyring 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 75242 ']' 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 75242 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 75242 ']' 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 75242 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75242 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:13.496 killing process with pid 75242 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75242' 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 75242 00:18:13.496 01:38:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 75242 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:14.431 01:38:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:14.431 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@300 -- # return 0 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.eElvJ8tHlH /tmp/tmp.S2LvEcDQDE /tmp/tmp.7aOSk2rSxl 00:18:14.690 00:18:14.690 real 1m44.567s 00:18:14.690 user 2m52.607s 00:18:14.690 sys 0m26.480s 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:14.690 ************************************ 00:18:14.690 END TEST nvmf_tls 00:18:14.690 01:38:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:14.690 ************************************ 00:18:14.690 01:38:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:14.690 01:38:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:14.690 01:38:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:14.690 01:38:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:14.690 ************************************ 00:18:14.690 START TEST nvmf_fips 00:18:14.690 ************************************ 00:18:14.690 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:14.690 * Looking for test storage... 
00:18:14.690 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:18:14.690 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:14.690 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:14.690 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:14.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.951 --rc genhtml_branch_coverage=1 00:18:14.951 --rc genhtml_function_coverage=1 00:18:14.951 --rc genhtml_legend=1 00:18:14.951 --rc geninfo_all_blocks=1 00:18:14.951 --rc geninfo_unexecuted_blocks=1 00:18:14.951 00:18:14.951 ' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:14.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.951 --rc genhtml_branch_coverage=1 00:18:14.951 --rc genhtml_function_coverage=1 00:18:14.951 --rc genhtml_legend=1 00:18:14.951 --rc geninfo_all_blocks=1 00:18:14.951 --rc geninfo_unexecuted_blocks=1 00:18:14.951 00:18:14.951 ' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:14.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.951 --rc genhtml_branch_coverage=1 00:18:14.951 --rc genhtml_function_coverage=1 00:18:14.951 --rc genhtml_legend=1 00:18:14.951 --rc geninfo_all_blocks=1 00:18:14.951 --rc geninfo_unexecuted_blocks=1 00:18:14.951 00:18:14.951 ' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:14.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:14.951 --rc genhtml_branch_coverage=1 00:18:14.951 --rc genhtml_function_coverage=1 00:18:14.951 --rc genhtml_legend=1 00:18:14.951 --rc geninfo_all_blocks=1 00:18:14.951 --rc geninfo_unexecuted_blocks=1 00:18:14.951 00:18:14.951 ' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
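The lt/ge helpers traced above for the lcov check (and used again below for the openssl >= 3.0.0 check) split each version string on '.', '-' and ':' via the IFS=.-: / read -ra steps and then walk the fields left to right, comparing them numerically. A simplified standalone sketch of the same idea; this sketch treats missing fields as zero and only implements the less-than case, whereas the real scripts/common.sh cmp_versions dispatches on the '<', '>', '>=' operator and normalizes fields through its decimal helper, so treat it as an illustration rather than the actual implementation:

    #!/usr/bin/env bash
    # Simplified field-wise version compare, mirroring the IFS=.-: / read -ra steps in the trace.
    # Returns 0 (true) when $1 < $2, non-zero otherwise.
    version_lt() {
        local IFS=.-:
        read -ra a <<< "$1"
        read -ra b <<< "$2"
        local n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for ((i = 0; i < n; i++)); do
            local x=${a[i]:-0} y=${b[i]:-0}   # missing fields count as 0 in this sketch
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                              # equal -> not less-than
    }

    version_lt 1.15 2 && echo "1.15 < 2"              # matches the lcov check above
    version_lt 3.1.1 3.0.0 || echo "3.1.1 >= 3.0.0"   # matches the openssl 3.x check below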
00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:14.951 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:14.951 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local 
target=3.0.0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # type -P openssl 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:14.952 Error setting digest 00:18:14.952 40A293E81C7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:14.952 40A293E81C7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:14.952 
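[annotation] The block above is fips.sh proving that FIPS enforcement is real rather than assumed: the OpenSSL version gate (3.1.1 >= 3.0.0) and the presence of /usr/lib64/ossl-modules/fips.so only establish that the provider exists, so the script generates spdk_fips.conf, confirms both the base and fips providers are loaded, and then requires `openssl md5` to fail, since MD5 is not an approved digest. A minimal standalone sketch of that probe, assuming OPENSSL_CONF already points at a config that activates the fips provider (as fips.sh builds above):

    # probe sketch: with the fips provider enforcing policy, MD5 must be rejected
    # while an approved digest keeps working (OPENSSL_CONF is assumed to be set
    # to a fips-enabled config; not generated here)
    if echo -n probe | openssl md5 >/dev/null 2>&1; then
        echo "MD5 succeeded: FIPS policy is not being enforced" >&2
        exit 1
    fi
    echo -n probe | openssl sha256 >/dev/null && echo "FIPS provider active"

The NOT/valid_exec_arg wrapper in the trace implements the same inversion: it resolves /usr/bin/openssl, runs it, and the test only continues because the command exits non-zero.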
01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:14.952 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:14.953 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:14.953 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:15.212 Cannot find device "nvmf_init_br" 00:18:15.212 01:38:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:15.212 Cannot find device "nvmf_init_br2" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:15.212 Cannot find device "nvmf_tgt_br" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@164 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:15.212 Cannot find device "nvmf_tgt_br2" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@165 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:15.212 Cannot find device "nvmf_init_br" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@166 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:15.212 Cannot find device "nvmf_init_br2" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@167 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:15.212 Cannot find device "nvmf_tgt_br" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@168 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:15.212 Cannot find device "nvmf_tgt_br2" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:15.212 Cannot find device "nvmf_br" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@170 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:15.212 Cannot find device "nvmf_init_if" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:15.212 Cannot find device "nvmf_init_if2" 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:15.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@173 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:15.212 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@174 -- # true 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:15.212 01:38:23 
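[annotation] The run of "Cannot find device" messages above is expected on a clean host: nvmf_veth_init starts by tearing down whatever a previous run may have left behind, and each failing teardown command is immediately followed by a bare `true` in the trace, consistent with the failures being deliberately tolerated so the setup can proceed. A sketch of that idempotent-cleanup idea, using the device names from this run:

    # idempotent cleanup sketch: a missing device is not an error here
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster 2>/dev/null || true
        ip link set "$dev" down    2>/dev/null || true
    done
    ip link delete nvmf_br type bridge 2>/dev/null || true
    ip link delete nvmf_init_if        2>/dev/null || true
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 2>/dev/null || true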
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:15.212 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
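[annotation] By the end of this block the whole virtual fabric exists: nvmf_init_if and nvmf_init_if2 (10.0.0.1 and 10.0.0.2/24) stay in the root namespace as initiator ports, nvmf_tgt_if and nvmf_tgt_if2 (10.0.0.3 and 10.0.0.4/24) are moved into the nvmf_tgt_ns_spdk namespace for the target, and the four peer interfaces are enslaved to nvmf_br so both namespaces share one L2 segment. A single-pair sketch of the same construction, with the names and addresses from the log:

    # one initiator/target pair of the topology nvmf_veth_init builds above
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator end + bridge port
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target end + bridge port
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk              # target end lives in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link add nvmf_br type bridge
    for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link set nvmf_init_br master nvmf_br                     # bridge both namespaces together
    ip link set nvmf_tgt_br  master nvmf_br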
nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:15.471 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:15.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:15.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:18:15.472 00:18:15.472 --- 10.0.0.3 ping statistics --- 00:18:15.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.472 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:15.472 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:15.472 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:18:15.472 00:18:15.472 --- 10.0.0.4 ping statistics --- 00:18:15.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.472 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:15.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:15.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:15.472 00:18:15.472 --- 10.0.0.1 ping statistics --- 00:18:15.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.472 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:15.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:15.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.070 ms 00:18:15.472 00:18:15.472 --- 10.0.0.2 ping statistics --- 00:18:15.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:15.472 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@461 -- # return 0 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=75608 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 75608 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 75608 ']' 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.472 01:38:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:15.731 [2024-11-17 01:38:23.960617] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
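[annotation] The three ACCEPT rules installed just before the pings go through the ipts wrapper, which re-issues the rule with an iptables comment of the form SPDK_NVMF:<original arguments>; the matching iptr call in the teardown at the end of this test restores the saved ruleset with every SPDK_NVMF-tagged rule filtered out, so the host firewall is left exactly as it was found. The pattern in isolation (the function bodies mirror what is traced from nvmf/common.sh; the sample rule is the port 4420 one from this run):

    # tag rules so they can be removed wholesale later
    ipts() { iptables "$@" -m comment --comment "SPDK_NVMF:$*"; }
    iptr() { iptables-save | grep -v SPDK_NVMF | iptables-restore; }

    ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    # ... run the test ...
    iptr                                                            # strip only the tagged rules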
00:18:15.731 [2024-11-17 01:38:23.960837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.731 [2024-11-17 01:38:24.150122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.990 [2024-11-17 01:38:24.273015] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.990 [2024-11-17 01:38:24.273088] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.990 [2024-11-17 01:38:24.273117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.990 [2024-11-17 01:38:24.273133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.990 [2024-11-17 01:38:24.273151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:15.990 [2024-11-17 01:38:24.274579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:16.249 [2024-11-17 01:38:24.470396] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.gTi 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.gTi 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.gTi 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.gTi 00:18:16.508 01:38:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:16.767 [2024-11-17 01:38:25.214159] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:17.026 [2024-11-17 01:38:25.230084] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:17.026 [2024-11-17 01:38:25.230375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:17.026 malloc0 00:18:17.026 01:38:25 
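[annotation] The secret written to /tmp/spdk-psk.gTi is a TLS pre-shared key in the NVMe-oF interchange format (NVMeTLSkey-1:01:<base64 payload>:), and the target has just been configured so that the 10.0.0.3:4420 listener negotiates TLS, which is why tcp.c flags the support as experimental. The shell side of the key handling, as traced above (the mktemp suffix differs per run):

    # write the interchange-format PSK to a private temp file, as fips.sh does
    key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    key_path=$(mktemp -t spdk-psk.XXX)
    echo -n "$key" > "$key_path"
    chmod 0600 "$key_path"          # keep the PSK readable by the test user only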
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:17.026 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=75655 00:18:17.026 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:17.026 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 75655 /var/tmp/bdevperf.sock 00:18:17.026 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 75655 ']' 00:18:17.027 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:17.027 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:17.027 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:17.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:17.027 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:17.027 01:38:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:17.027 [2024-11-17 01:38:25.467304] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:17.027 [2024-11-17 01:38:25.467737] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75655 ] 00:18:17.286 [2024-11-17 01:38:25.655673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.545 [2024-11-17 01:38:25.781497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.545 [2024-11-17 01:38:25.955818] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:18.113 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:18.113 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:18.113 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.gTi 00:18:18.373 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:18.632 [2024-11-17 01:38:26.861442] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:18.632 TLSTESTn1 00:18:18.632 01:38:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:18.632 Running I/O for 10 seconds... 
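[annotation] On the initiator side the trace registers that PSK file as keyring entry key0 over bdevperf's private RPC socket, attaches a TLS-protected controller to nqn.2016-06.io.spdk:cnode1 at 10.0.0.3:4420 with it, and then lets the 10 second verify workload (queue depth 128, 4 KiB I/O, set on the bdevperf command line) run against the resulting TLSTESTn1 bdev. The three RPC steps, lifted from the trace and shown relative to the SPDK repo root:

    # initiator-side sequence: key -> TLS controller -> workload
    rpc_sock=/var/tmp/bdevperf.sock
    scripts/rpc.py -s "$rpc_sock" keyring_file_add_key key0 /tmp/spdk-psk.gTi
    scripts/rpc.py -s "$rpc_sock" bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        -q nqn.2016-06.io.spdk:host1 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s "$rpc_sock" perform_tests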
00:18:20.950 3072.00 IOPS, 12.00 MiB/s [2024-11-17T01:38:30.346Z] 3116.00 IOPS, 12.17 MiB/s [2024-11-17T01:38:31.283Z] 3138.33 IOPS, 12.26 MiB/s [2024-11-17T01:38:32.220Z] 3136.00 IOPS, 12.25 MiB/s [2024-11-17T01:38:33.157Z] 3167.80 IOPS, 12.37 MiB/s [2024-11-17T01:38:34.094Z] 3187.33 IOPS, 12.45 MiB/s [2024-11-17T01:38:35.473Z] 3192.57 IOPS, 12.47 MiB/s [2024-11-17T01:38:36.412Z] 3193.12 IOPS, 12.47 MiB/s [2024-11-17T01:38:37.349Z] 3194.89 IOPS, 12.48 MiB/s [2024-11-17T01:38:37.349Z] 3204.60 IOPS, 12.52 MiB/s 00:18:28.890 Latency(us) 00:18:28.890 [2024-11-17T01:38:37.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.890 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:28.890 Verification LBA range: start 0x0 length 0x2000 00:18:28.890 TLSTESTn1 : 10.02 3210.23 12.54 0.00 0.00 39796.60 7328.12 29908.25 00:18:28.890 [2024-11-17T01:38:37.349Z] =================================================================================================================== 00:18:28.890 [2024-11-17T01:38:37.349Z] Total : 3210.23 12.54 0.00 0.00 39796.60 7328.12 29908.25 00:18:28.890 { 00:18:28.890 "results": [ 00:18:28.890 { 00:18:28.890 "job": "TLSTESTn1", 00:18:28.890 "core_mask": "0x4", 00:18:28.890 "workload": "verify", 00:18:28.890 "status": "finished", 00:18:28.890 "verify_range": { 00:18:28.890 "start": 0, 00:18:28.890 "length": 8192 00:18:28.890 }, 00:18:28.890 "queue_depth": 128, 00:18:28.890 "io_size": 4096, 00:18:28.890 "runtime": 10.021103, 00:18:28.890 "iops": 3210.22546120921, 00:18:28.890 "mibps": 12.539943207848477, 00:18:28.890 "io_failed": 0, 00:18:28.890 "io_timeout": 0, 00:18:28.890 "avg_latency_us": 39796.59710153446, 00:18:28.890 "min_latency_us": 7328.1163636363635, 00:18:28.890 "max_latency_us": 29908.247272727273 00:18:28.890 } 00:18:28.890 ], 00:18:28.890 "core_count": 1 00:18:28.890 } 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:28.890 nvmf_trace.0 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:18:28.890 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 75655 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 75655 ']' 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 
75655 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75655 00:18:28.891 killing process with pid 75655 00:18:28.891 Received shutdown signal, test time was about 10.000000 seconds 00:18:28.891 00:18:28.891 Latency(us) 00:18:28.891 [2024-11-17T01:38:37.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:28.891 [2024-11-17T01:38:37.350Z] =================================================================================================================== 00:18:28.891 [2024-11-17T01:38:37.350Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75655' 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 75655 00:18:28.891 01:38:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 75655 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:29.828 rmmod nvme_tcp 00:18:29.828 rmmod nvme_fabrics 00:18:29.828 rmmod nvme_keyring 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 75608 ']' 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 75608 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 75608 ']' 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 75608 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75608 00:18:29.828 killing process with pid 75608 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips 
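[annotation] The bdevperf JSON a little further up is internally consistent: 3210.23 IOPS of 4096 byte reads works out to 3210.23 * 4096 / 1048576 ≈ 12.54 MiB/s, matching the mibps field, and with the queue depth pinned at 128 Little's law predicts a mean latency of 128 / 3210.23 ≈ 39.9 ms, in line with the reported 39.8 ms average over TLS on the veth link. A quick awk check of both figures:

    # cross-check throughput and Little's-law latency from the JSON above
    awk 'BEGIN {
        iops = 3210.23; io_size = 4096; queue_depth = 128
        printf "throughput %.2f MiB/s\n", iops * io_size / 1048576         # ~12.54
        printf "expected avg latency %.1f ms\n", queue_depth / iops * 1e3  # ~39.9
    }'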
-- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75608' 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 75608 00:18:29.828 01:38:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 75608 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:30.850 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@300 -- # return 0 00:18:31.110 01:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.gTi 00:18:31.110 ************************************ 00:18:31.110 END TEST nvmf_fips 00:18:31.110 ************************************ 00:18:31.110 00:18:31.110 real 0m16.415s 00:18:31.110 user 0m23.639s 00:18:31.110 sys 0m5.482s 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:31.110 ************************************ 00:18:31.110 START TEST nvmf_control_msg_list 00:18:31.110 ************************************ 00:18:31.110 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:18:31.110 * Looking for test storage... 00:18:31.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:31.370 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:31.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.371 --rc genhtml_branch_coverage=1 00:18:31.371 --rc genhtml_function_coverage=1 00:18:31.371 --rc genhtml_legend=1 00:18:31.371 --rc geninfo_all_blocks=1 00:18:31.371 --rc geninfo_unexecuted_blocks=1 00:18:31.371 00:18:31.371 ' 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:31.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.371 --rc genhtml_branch_coverage=1 00:18:31.371 --rc genhtml_function_coverage=1 00:18:31.371 --rc genhtml_legend=1 00:18:31.371 --rc geninfo_all_blocks=1 00:18:31.371 --rc geninfo_unexecuted_blocks=1 00:18:31.371 00:18:31.371 ' 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:31.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.371 --rc genhtml_branch_coverage=1 00:18:31.371 --rc genhtml_function_coverage=1 00:18:31.371 --rc genhtml_legend=1 00:18:31.371 --rc geninfo_all_blocks=1 00:18:31.371 --rc geninfo_unexecuted_blocks=1 00:18:31.371 00:18:31.371 ' 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:31.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.371 --rc genhtml_branch_coverage=1 00:18:31.371 --rc genhtml_function_coverage=1 00:18:31.371 --rc genhtml_legend=1 00:18:31.371 --rc geninfo_all_blocks=1 00:18:31.371 --rc 
geninfo_unexecuted_blocks=1 00:18:31.371 00:18:31.371 ' 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.371 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:31.372 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
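[annotation] The "line 33: [: : integer expression expected" message earlier in this block is benign: the traced command is '[' '' -eq 1 ']', and test's -eq operator requires integer operands, so an unset flag expands to the empty string, the comparison prints the complaint, returns non-zero, and the script simply treats the condition as false and carries on. The behaviour in isolation, plus the usual defensive spelling (the variable name below is illustrative):

    # empty string vs. -eq: noisy but harmless, the branch is just not taken
    some_flag=''
    if [ "$some_flag" -eq 1 ] 2>/dev/null; then echo "enabled"; else echo "treated as disabled"; fi
    # quieter defensive form: default the flag to 0 before comparing
    if [ "${some_flag:-0}" -eq 1 ]; then echo "enabled"; else echo "disabled, no warning"; fi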
nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:31.372 Cannot find device "nvmf_init_br" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:31.372 Cannot find device "nvmf_init_br2" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:31.372 Cannot find device "nvmf_tgt_br" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@164 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:31.372 Cannot find device "nvmf_tgt_br2" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@165 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:31.372 Cannot find device "nvmf_init_br" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@166 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:31.372 Cannot find device "nvmf_init_br2" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@167 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:31.372 Cannot find device "nvmf_tgt_br" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@168 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:31.372 Cannot find device "nvmf_tgt_br2" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:31.372 Cannot find device "nvmf_br" 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@170 -- # true 00:18:31.372 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:31.632 Cannot find 
device "nvmf_init_if" 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # true 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:31.632 Cannot find device "nvmf_init_if2" 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # true 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:31.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@173 -- # true 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:31.632 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@174 -- # true 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:31.632 01:38:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:31.632 01:38:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:31.632 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:31.632 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:18:31.632 00:18:31.632 --- 10.0.0.3 ping statistics --- 00:18:31.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.632 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:31.632 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:31.632 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.055 ms 00:18:31.632 00:18:31.632 --- 10.0.0.4 ping statistics --- 00:18:31.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.632 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:31.632 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:31.632 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:18:31.632 00:18:31.632 --- 10.0.0.1 ping statistics --- 00:18:31.632 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.632 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:18:31.632 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:31.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:31.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:18:31.891 00:18:31.891 --- 10.0.0.2 ping statistics --- 00:18:31.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:31.891 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:18:31.891 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:31.891 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@461 -- # return 0 00:18:31.891 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:31.891 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:31.891 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:31.891 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=76056 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 76056 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 76056 ']' 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.892 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
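The nvmf_veth_init sequence traced above is what gives the test its fixture network: an nvmf_tgt_ns_spdk namespace holding the target-side interfaces (10.0.0.3 and 10.0.0.4), initiator-side interfaces left in the root namespace (10.0.0.1 and 10.0.0.2), an nvmf_br bridge joining the veth peer ends, iptables ACCEPT rules for TCP port 4420, and a ping sweep as a sanity check. A condensed, hand-written sketch of the same topology, covering only the first interface pair and omitting the harness's error handling, might look like:

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side, stays in the root namespace
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br                        # bridge the two veth peer ends together
  ip link set nvmf_tgt_br master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.3                                             # initiator reaching the in-namespace target address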
00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.892 01:38:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:31.892 [2024-11-17 01:38:40.245993] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:31.892 [2024-11-17 01:38:40.246163] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.150 [2024-11-17 01:38:40.430056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.150 [2024-11-17 01:38:40.528813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.151 [2024-11-17 01:38:40.528894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.151 [2024-11-17 01:38:40.528932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:32.151 [2024-11-17 01:38:40.528971] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:32.151 [2024-11-17 01:38:40.528996] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:32.151 [2024-11-17 01:38:40.531070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.410 [2024-11-17 01:38:40.743701] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.978 [2024-11-17 01:38:41.271067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd 
nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.978 Malloc0 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:32.978 [2024-11-17 01:38:41.330853] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=76088 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=76089 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=76090 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:32.978 01:38:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 76088 00:18:33.237 [2024-11-17 01:38:41.585612] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:33.237 [2024-11-17 01:38:41.596886] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:33.237 [2024-11-17 01:38:41.597358] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:34.174 Initializing NVMe Controllers 00:18:34.174 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:34.174 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:18:34.174 Initialization complete. Launching workers. 00:18:34.174 ======================================================== 00:18:34.174 Latency(us) 00:18:34.174 Device Information : IOPS MiB/s Average min max 00:18:34.174 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 2915.00 11.39 342.60 162.73 1321.43 00:18:34.174 ======================================================== 00:18:34.174 Total : 2915.00 11.39 342.60 162.73 1321.43 00:18:34.174 00:18:34.174 Initializing NVMe Controllers 00:18:34.174 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:34.174 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:18:34.174 Initialization complete. Launching workers. 00:18:34.174 ======================================================== 00:18:34.174 Latency(us) 00:18:34.174 Device Information : IOPS MiB/s Average min max 00:18:34.174 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 2906.00 11.35 343.59 234.30 662.20 00:18:34.174 ======================================================== 00:18:34.174 Total : 2906.00 11.35 343.59 234.30 662.20 00:18:34.174 00:18:34.174 Initializing NVMe Controllers 00:18:34.174 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:34.174 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:18:34.174 Initialization complete. Launching workers. 
00:18:34.174 ======================================================== 00:18:34.174 Latency(us) 00:18:34.174 Device Information : IOPS MiB/s Average min max 00:18:34.174 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 2901.00 11.33 344.22 234.26 882.48 00:18:34.174 ======================================================== 00:18:34.174 Total : 2901.00 11.33 344.22 234.26 882.48 00:18:34.174 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 76089 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 76090 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:34.433 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:34.433 rmmod nvme_tcp 00:18:34.433 rmmod nvme_fabrics 00:18:34.434 rmmod nvme_keyring 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 76056 ']' 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 76056 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 76056 ']' 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 76056 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76056 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.434 killing process with pid 76056 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76056' 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 76056 00:18:34.434 01:38:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@978 -- # wait 76056 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:35.370 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@300 -- # return 0 00:18:35.629 00:18:35.629 real 0m4.483s 00:18:35.629 user 0m6.690s 00:18:35.629 
sys 0m1.560s 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:35.629 01:38:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:18:35.629 ************************************ 00:18:35.629 END TEST nvmf_control_msg_list 00:18:35.629 ************************************ 00:18:35.629 01:38:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:35.629 01:38:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:35.629 01:38:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:35.629 01:38:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:35.629 ************************************ 00:18:35.629 START TEST nvmf_wait_for_buf 00:18:35.629 ************************************ 00:18:35.629 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:18:35.889 * Looking for test storage... 00:18:35.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.889 --rc genhtml_branch_coverage=1 00:18:35.889 --rc genhtml_function_coverage=1 00:18:35.889 --rc genhtml_legend=1 00:18:35.889 --rc geninfo_all_blocks=1 00:18:35.889 --rc geninfo_unexecuted_blocks=1 00:18:35.889 00:18:35.889 ' 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.889 --rc genhtml_branch_coverage=1 00:18:35.889 --rc genhtml_function_coverage=1 00:18:35.889 --rc genhtml_legend=1 00:18:35.889 --rc geninfo_all_blocks=1 00:18:35.889 --rc geninfo_unexecuted_blocks=1 00:18:35.889 00:18:35.889 ' 00:18:35.889 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:35.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.889 --rc genhtml_branch_coverage=1 00:18:35.889 --rc genhtml_function_coverage=1 00:18:35.889 --rc genhtml_legend=1 00:18:35.889 --rc geninfo_all_blocks=1 00:18:35.889 --rc geninfo_unexecuted_blocks=1 00:18:35.889 00:18:35.889 ' 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:35.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:35.890 --rc genhtml_branch_coverage=1 00:18:35.890 --rc genhtml_function_coverage=1 00:18:35.890 --rc genhtml_legend=1 00:18:35.890 --rc geninfo_all_blocks=1 00:18:35.890 --rc geninfo_unexecuted_blocks=1 00:18:35.890 00:18:35.890 ' 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:35.890 01:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:35.890 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 
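The ": integer expression expected" complaint from nvmf/common.sh line 33, seen again a few entries above (and at 01:38:39 earlier in this run), is a bash artifact rather than a test failure: line 33 runs an arithmetic '[ ... -eq 1 ]' test on a variable that is empty in this environment, and test refuses to treat an empty string as an integer. A minimal reproduction and the usual guard, with FOO standing in for the variable (its name is not visible in this trace):

  FOO=""
  [ "$FOO" -eq 1 ]        # prints "[: : integer expression expected" and exits with status 2
  [ "${FOO:-0}" -eq 1 ]   # defaulting the expansion to 0 keeps the comparison well-formed (exit status 1)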
00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:35.890 Cannot find device "nvmf_init_br" 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # true 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:35.890 Cannot find device "nvmf_init_br2" 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # true 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:35.890 Cannot find device "nvmf_tgt_br" 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@164 -- # true 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:35.890 Cannot find device "nvmf_tgt_br2" 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@165 -- # true 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:35.890 Cannot find device "nvmf_init_br" 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@166 -- # true 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:35.890 Cannot find device "nvmf_init_br2" 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@167 -- # true 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:35.890 Cannot find device "nvmf_tgt_br" 00:18:35.890 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@168 -- # true 00:18:35.891 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:36.165 Cannot find device "nvmf_tgt_br2" 00:18:36.165 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # true 00:18:36.165 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:36.165 Cannot find device "nvmf_br" 00:18:36.165 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@170 -- # true 00:18:36.165 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:36.165 Cannot find device "nvmf_init_if" 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # true 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:36.166 Cannot find device "nvmf_init_if2" 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # true 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:36.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@173 -- # true 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:36.166 Cannot 
open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@174 -- # true 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:36.166 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:36.167 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:36.431 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:36.431 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:36.431 00:18:36.431 --- 10.0.0.3 ping statistics --- 00:18:36.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.431 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:36.431 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:36.431 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:18:36.431 00:18:36.431 --- 10.0.0.4 ping statistics --- 00:18:36.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.431 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:36.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:36.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:18:36.431 00:18:36.431 --- 10.0.0.1 ping statistics --- 00:18:36.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.431 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:36.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:36.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.050 ms 00:18:36.431 00:18:36.431 --- 10.0.0.2 ping statistics --- 00:18:36.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.431 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@461 -- # return 0 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:36.431 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=76339 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 76339 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 76339 ']' 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:36.432 01:38:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:36.432 [2024-11-17 01:38:44.780924] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
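nvmfappstart --wait-for-rpc, issued just above, starts nvmf_tgt inside the namespace in a paused pre-init state so that wait_for_buf.sh can shrink the iobuf small-buffer pool before the framework initializes; the rpc_cmd calls that follow in this trace (accel_set_options, iobuf_set_options, framework_start_init, then the transport/subsystem setup) do exactly that. Driven by hand with scripts/rpc.py against the same /var/tmp/spdk.sock socket, the equivalent sequence would be roughly (flags copied from the trace, not an exact reproduction of the script):

  rpc.py accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc.py iobuf_set_options --small-pool-count 154 --small_bufsize=8192    # deliberately tiny small-buffer pool
  rpc.py framework_start_init                                             # leave the --wait-for-rpc pause
  rpc.py bdev_malloc_create -b Malloc0 32 512
  rpc.py nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24              # '-t tcp -o' comes from NVMF_TRANSPORT_OPTS
  rpc.py nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc.py nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420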
00:18:36.432 [2024-11-17 01:38:44.781675] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:36.690 [2024-11-17 01:38:44.958242] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.690 [2024-11-17 01:38:45.041818] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:36.690 [2024-11-17 01:38:45.041903] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:36.690 [2024-11-17 01:38:45.041938] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:36.690 [2024-11-17 01:38:45.041960] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:36.690 [2024-11-17 01:38:45.041974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:36.690 [2024-11-17 01:38:45.043099] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:18:37.628 01:38:45 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.628 01:38:45 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 [2024-11-17 01:38:45.938720] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 Malloc0 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 [2024-11-17 01:38:46.070684] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.628 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.887 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.887 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:18:37.887 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.887 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:37.887 [2024-11-17 01:38:46.094879] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:37.887 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.887 01:38:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:18:37.887 [2024-11-17 01:38:46.324987] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: 
Allowing connection to discovery subsystem on TCP/10.0.0.3/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:18:39.265 Initializing NVMe Controllers 00:18:39.265 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2024-07.io.spdk:cnode0 00:18:39.265 Associating TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:18:39.265 Initialization complete. Launching workers. 00:18:39.265 ======================================================== 00:18:39.265 Latency(us) 00:18:39.265 Device Information : IOPS MiB/s Average min max 00:18:39.265 TCP (addr:10.0.0.3 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 500.99 62.62 7997.90 5926.26 10072.81 00:18:39.265 ======================================================== 00:18:39.265 Total : 500.99 62.62 7997.90 5926.26 10072.81 00:18:39.265 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=4750 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 4750 -eq 0 ]] 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:39.265 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:39.524 rmmod nvme_tcp 00:18:39.524 rmmod nvme_fabrics 00:18:39.524 rmmod nvme_keyring 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 76339 ']' 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 76339 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 76339 ']' 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- 
# kill -0 76339 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76339 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:39.524 killing process with pid 76339 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76339' 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 76339 00:18:39.524 01:38:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 76339 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:40.460 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@300 -- # return 0 00:18:40.461 00:18:40.461 real 0m4.820s 00:18:40.461 user 0m4.364s 00:18:40.461 sys 0m0.922s 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:18:40.461 ************************************ 00:18:40.461 END TEST nvmf_wait_for_buf 00:18:40.461 ************************************ 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:40.461 ************************************ 00:18:40.461 START TEST nvmf_fuzz 00:18:40.461 ************************************ 00:18:40.461 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:18:40.721 * Looking for test storage... 
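Note: the nvmf_wait_for_buf test that just finished passes by design. It shrinks the iobuf small pool to 154 buffers of 8192 bytes, drives 128 KiB random reads at the TCP target with spdk_nvme_perf, and then requires the nvmf_TCP small-pool retry counter to be non-zero (4750 in this run), i.e. the target kept serving I/O by waiting for buffers instead of failing. A condensed sketch of the RPC sequence the trace shows, assuming rpc_cmd is the test-harness wrapper around scripts/rpc.py and that nvmf_tgt was started with --wait-for-rpc as above:

  rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0
  rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192
  rpc_cmd framework_start_init
  rpc_cmd bdev_malloc_create -b Malloc0 32 512
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -n 24 -b 24
  rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
  spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420'
  rpc_cmd iobuf_get_stats | jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry'   # test asserts this is non-zero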
00:18:40.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:40.721 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:40.721 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:18:40.721 01:38:48 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:18:40.721 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:40.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.722 --rc genhtml_branch_coverage=1 00:18:40.722 --rc genhtml_function_coverage=1 00:18:40.722 --rc genhtml_legend=1 00:18:40.722 --rc geninfo_all_blocks=1 00:18:40.722 --rc geninfo_unexecuted_blocks=1 00:18:40.722 00:18:40.722 ' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:40.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.722 --rc genhtml_branch_coverage=1 00:18:40.722 --rc genhtml_function_coverage=1 00:18:40.722 --rc genhtml_legend=1 00:18:40.722 --rc geninfo_all_blocks=1 00:18:40.722 --rc geninfo_unexecuted_blocks=1 00:18:40.722 00:18:40.722 ' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:40.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.722 --rc genhtml_branch_coverage=1 00:18:40.722 --rc genhtml_function_coverage=1 00:18:40.722 --rc genhtml_legend=1 00:18:40.722 --rc geninfo_all_blocks=1 00:18:40.722 --rc geninfo_unexecuted_blocks=1 00:18:40.722 00:18:40.722 ' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:40.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:40.722 --rc genhtml_branch_coverage=1 00:18:40.722 --rc genhtml_function_coverage=1 00:18:40.722 --rc genhtml_legend=1 00:18:40.722 --rc geninfo_all_blocks=1 00:18:40.722 --rc geninfo_unexecuted_blocks=1 00:18:40.722 00:18:40.722 ' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:40.722 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
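Note: with NET_TYPE=virt, nvmftestinit does not touch physical NICs; the nvmf_veth_init trace that follows builds a throwaway topology of veth pairs joined by one bridge, with the target-side ends moved into the nvmf_tgt_ns_spdk namespace. Roughly, condensed from the commands visible in the trace (interface names and the 10.0.0.x/24 addresses come from test/nvmf/common.sh):

  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if  type veth peer name nvmf_init_br     # initiator side
  ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
  ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br      # target side
  ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # and 10.0.0.2 on nvmf_init_if2
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # and 10.0.0.4 on nvmf_tgt_if2
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for br in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do ip link set "$br" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # plus nvmf_init_if2 and a bridge FORWARD rule
  ping -c 1 10.0.0.3                                               # host-to-namespace sanity check

The "Cannot find device" messages in the cleanup pass are expected on a clean host: common.sh tears down any leftover interfaces before creating them, and each failing delete is tolerated (note the "# true" after each in the trace).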
00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:40.722 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:40.723 Cannot find device "nvmf_init_br" 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@162 -- # true 00:18:40.723 01:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:40.723 Cannot find device "nvmf_init_br2" 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@163 -- # true 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:40.723 Cannot find device "nvmf_tgt_br" 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@164 -- # true 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:40.723 Cannot find device "nvmf_tgt_br2" 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@165 -- # true 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:40.723 Cannot find device "nvmf_init_br" 00:18:40.723 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@166 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:40.981 Cannot find device "nvmf_init_br2" 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@167 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:40.981 Cannot find device "nvmf_tgt_br" 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@168 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:40.981 Cannot find device "nvmf_tgt_br2" 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@169 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:40.981 Cannot find device "nvmf_br" 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@170 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:40.981 Cannot find device "nvmf_init_if" 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@171 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:18:40.981 Cannot find device "nvmf_init_if2" 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@172 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:40.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@173 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:40.981 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@174 -- # true 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type 
veth peer name nvmf_init_br2 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:40.981 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:41.239 01:38:49 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:41.239 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:41.239 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:18:41.239 00:18:41.239 --- 10.0.0.3 ping statistics --- 00:18:41.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.239 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:41.239 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:41.239 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.094 ms 00:18:41.239 00:18:41.239 --- 10.0.0.4 ping statistics --- 00:18:41.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.239 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:18:41.239 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:41.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:41.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:18:41.240 00:18:41.240 --- 10.0.0.1 ping statistics --- 00:18:41.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.240 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:41.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:41.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:18:41.240 00:18:41.240 --- 10.0.0.2 ping statistics --- 00:18:41.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:41.240 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@461 -- # return 0 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=76638 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 76638 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' -z 76638 ']' 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:41.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
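Note: for fabrics_fuzz the target (nvmf_tgt, pid 76638 in this run) is started inside the namespace pinned to a single core (-m 0x1). Once it is up, the script creates the TCP transport, a Malloc0 bdev exposed as a namespace of nqn.2016-06.io.spdk:cnode1, and a listener on 10.0.0.3:4420, then runs the fuzzer on a separate core mask (-m 0x2) twice: one randomized pass and one pass replaying the canned commands in example.json. The two invocations, taken from the trace (nvme_fuzz abbreviates test/app/fuzz/nvme_fuzz/nvme_fuzz in this job's checkout):

  nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a
  nvme_fuzz -m 0x2 \
      -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' \
      -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a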
00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:41.240 01:38:49 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.616 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.617 Malloc0 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' 00:18:42.617 01:38:50 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -N -a 00:18:42.876 Shutting down the fuzz application 00:18:42.876 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 
'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.3 trsvcid:4420' -j /home/vagrant/spdk_repo/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:18:43.444 Shutting down the fuzz application 00:18:43.444 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:43.444 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.444 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:43.444 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.444 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:18:43.444 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:18:43.444 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:43.444 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:43.702 rmmod nvme_tcp 00:18:43.702 rmmod nvme_fabrics 00:18:43.702 rmmod nvme_keyring 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 76638 ']' 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 76638 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 76638 ']' 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 76638 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.702 01:38:51 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76638 00:18:43.703 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.703 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.703 killing process with pid 76638 00:18:43.703 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76638' 00:18:43.703 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 76638 00:18:43.703 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 76638 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:44.639 01:38:52 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:18:44.639 01:38:52 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:18:44.639 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:18:44.639 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:18:44.639 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:18:44.639 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:18:44.639 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:18:44.639 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:18:44.639 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@246 -- # remove_spdk_ns 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@300 -- # return 0 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs1.txt /home/vagrant/spdk_repo/spdk/../output/nvmf_fuzz_logs2.txt 00:18:44.898 00:18:44.898 real 0m4.346s 00:18:44.898 user 0m4.577s 00:18:44.898 sys 0m0.895s 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:44.898 ************************************ 00:18:44.898 END TEST nvmf_fuzz 00:18:44.898 ************************************ 00:18:44.898 01:38:53 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:44.898 ************************************ 00:18:44.898 START TEST nvmf_multiconnection 00:18:44.898 ************************************ 00:18:44.898 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:18:45.158 * Looking for test storage... 00:18:45.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:18:45.158 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:45.158 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lcov --version 00:18:45.158 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:45.158 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:45.158 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:45.158 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:45.158 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.159 --rc genhtml_branch_coverage=1 00:18:45.159 --rc genhtml_function_coverage=1 00:18:45.159 --rc genhtml_legend=1 00:18:45.159 --rc geninfo_all_blocks=1 00:18:45.159 --rc geninfo_unexecuted_blocks=1 00:18:45.159 00:18:45.159 ' 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.159 --rc genhtml_branch_coverage=1 00:18:45.159 --rc genhtml_function_coverage=1 00:18:45.159 --rc genhtml_legend=1 00:18:45.159 --rc geninfo_all_blocks=1 00:18:45.159 --rc geninfo_unexecuted_blocks=1 00:18:45.159 00:18:45.159 ' 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.159 --rc genhtml_branch_coverage=1 00:18:45.159 --rc genhtml_function_coverage=1 00:18:45.159 --rc genhtml_legend=1 00:18:45.159 --rc geninfo_all_blocks=1 00:18:45.159 --rc geninfo_unexecuted_blocks=1 00:18:45.159 00:18:45.159 ' 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:45.159 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:45.159 --rc genhtml_branch_coverage=1 00:18:45.159 --rc genhtml_function_coverage=1 00:18:45.159 --rc genhtml_legend=1 00:18:45.159 --rc geninfo_all_blocks=1 00:18:45.159 --rc geninfo_unexecuted_blocks=1 00:18:45.159 00:18:45.159 ' 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source 
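The block above is the lcov version gate from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared component by component, so 1.15 sorts before 2 and the old-style --rc coverage options get selected. A minimal standalone sketch of that comparison, assuming a hypothetical helper name (version_lt) rather than the exact scripts/common.sh implementation, and handling numeric components only:

    # version_lt A B -> exit 0 if A sorts before B, 1 otherwise (hypothetical helper)
    version_lt() {
        local IFS='.-:'                      # same separators the trace splits on
        local -a a=($1) b=($2)
        local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
        for (( i = 0; i < n; i++ )); do
            local x=${a[i]:-0} y=${b[i]:-0}  # missing components count as 0
            (( x < y )) && return 0
            (( x > y )) && return 1
        done
        return 1                             # equal versions are not "less than"
    }

    version_lt 1.15 2 && echo "lcov 1.15 predates 2: keep the old-style --rc options"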
/home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.159 
01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.159 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:45.160 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@460 -- # nvmf_veth_init 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:18:45.160 01:38:53 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:18:45.160 Cannot find device "nvmf_init_br" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@162 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:18:45.160 Cannot find device "nvmf_init_br2" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@163 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:18:45.160 Cannot find device "nvmf_tgt_br" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@164 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:18:45.160 Cannot find device "nvmf_tgt_br2" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@165 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:18:45.160 Cannot find device "nvmf_init_br" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@166 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:18:45.160 Cannot find device "nvmf_init_br2" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@167 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:18:45.160 Cannot find device "nvmf_tgt_br" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@168 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:18:45.160 Cannot find device "nvmf_tgt_br2" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@169 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:18:45.160 Cannot find device "nvmf_br" 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@170 -- # true 00:18:45.160 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:18:45.420 Cannot find device "nvmf_init_if" 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@171 -- # true 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # ip link delete 
nvmf_init_if2 00:18:45.420 Cannot find device "nvmf_init_if2" 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@172 -- # true 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:18:45.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@173 -- # true 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:18:45.420 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@174 -- # true 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set 
nvmf_tgt_if2 up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:18:45.420 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:18:45.420 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:18:45.420 00:18:45.420 --- 10.0.0.3 ping statistics --- 00:18:45.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.420 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:18:45.420 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:18:45.420 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.044 ms 00:18:45.420 00:18:45.420 --- 10.0.0.4 ping statistics --- 00:18:45.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.420 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:18:45.420 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:18:45.679 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
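At this point nvmf_veth_init has rebuilt the virtual test network that the fuzz teardown at the top of this section removed: two initiator veth pairs stay on the host, two target veth pairs are moved into the nvmf_tgt_ns_spdk namespace, the host-side peers are bridged together, and port 4420 is opened. A condensed sketch of the same topology, using only interface names, addresses and rules that appear verbatim in the trace (run as root; the SPDK_NVMF iptables comments are omitted):

    ip netns add nvmf_tgt_ns_spdk                              # namespace that will host nvmf_tgt

    # veth pairs: *_if ends carry traffic, *_br ends get enslaved to the bridge
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk            # target ends live in the namespace
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # addressing: initiators 10.0.0.1/.2 on the host, targets 10.0.0.3/.4 in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # bring everything up and bridge the host-side peers
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # accept NVMe/TCP (port 4420) on the initiator interfaces, let the bridge forward
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # sanity check: the host can reach both target addresses
    ping -c 1 10.0.0.3 && ping -c 1 10.0.0.4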
00:18:45.679 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:18:45.679 00:18:45.679 --- 10.0.0.1 ping statistics --- 00:18:45.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.679 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:18:45.679 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:45.679 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:18:45.679 00:18:45.679 --- 10.0.0.2 ping statistics --- 00:18:45.679 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:45.679 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@461 -- # return 0 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:45.679 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=76902 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 76902 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 76902 ']' 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
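With the data-plane addresses verified, nvmfappstart launches the target inside the namespace with the arguments shown above (-i 0 shared-memory id, -e 0xFFFF tracepoint mask, -m 0xF core mask) and blocks until the JSON-RPC socket answers. A rough equivalent of that start-and-wait step; the polling loop and the use of rpc.py spdk_get_version are assumptions standing in for SPDK's waitforlisten helper:

    SPDK=/home/vagrant/spdk_repo/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    # start the NVMe-oF target inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!

    # poll the RPC socket until the app is up (assumed stand-in for waitforlisten)
    for _ in $(seq 1 100); do
        if "$SPDK/scripts/rpc.py" -s "$RPC_SOCK" spdk_get_version >/dev/null 2>&1; then
            echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"
            break
        fi
        sleep 0.1
    done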
00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:45.679 01:38:53 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:45.679 [2024-11-17 01:38:54.040223] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:18:45.679 [2024-11-17 01:38:54.040394] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:45.938 [2024-11-17 01:38:54.227385] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:45.938 [2024-11-17 01:38:54.355543] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:45.938 [2024-11-17 01:38:54.355639] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:45.938 [2024-11-17 01:38:54.355664] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:45.938 [2024-11-17 01:38:54.355678] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:45.938 [2024-11-17 01:38:54.355695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:45.938 [2024-11-17 01:38:54.357878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:45.938 [2024-11-17 01:38:54.358024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:45.938 [2024-11-17 01:38:54.358182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:45.938 [2024-11-17 01:38:54.358229] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.197 [2024-11-17 01:38:54.534952] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:46.765 [2024-11-17 01:38:55.050209] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:18:46.765 01:38:55 
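The four "Reactor started on core 0..3" notices follow directly from the -m 0xF core mask: 0xF is binary 1111, one bit per core 0 through 3, which matches "Total cores available: 4". An illustrative snippet for expanding such a mask (not part of the test scripts):

    mask=0xF
    for core in $(seq 0 63); do                 # 0xF -> cores 0, 1, 2, 3
        (( (mask >> core) & 1 )) && echo "core $core"
    done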
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:46.765 Malloc1 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:46.765 [2024-11-17 01:38:55.168190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.765 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 Malloc2 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 Malloc3 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 
$NVMF_SUBSYS) 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 Malloc4 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.3 -s 4420 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.024 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.025 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:18:47.025 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.025 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.283 Malloc5 00:18:47.283 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:18:47.284 
01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.3 -s 4420 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 Malloc6 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.3 -s 4420 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 Malloc7 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.3 -s 4420 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.284 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.543 Malloc8 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.543 
01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.3 -s 4420 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.543 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.544 Malloc9 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.3 -s 4420 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.544 Malloc10 00:18:47.544 01:38:55 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.3 -s 4420 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.544 01:38:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.803 Malloc11 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.3 -s 4420 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:47.803 01:38:56 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.3 -s 4420 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:18:50.332 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:50.333 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:50.333 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:50.333 01:38:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:52.231 01:39:00 
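The long repetitive block above is target/multiconnection.sh iterating i=1..11: each pass creates a 64 MB, 512-byte-block malloc bdev, wraps it in subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN, exposes it on a TCP listener at 10.0.0.3:4420, and the initiator then connects and polls lsblk until that serial appears. A condensed sketch of the loop, assuming rpc_cmd maps onto scripts/rpc.py and reusing the host NQN/ID generated earlier in this run:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81
    HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81

    # one TCP transport, then one malloc-backed subsystem per connection under test
    $RPC nvmf_create_transport -t tcp -o -u 8192
    for i in $(seq 1 11); do
        $RPC bdev_malloc_create 64 512 -b "Malloc$i"
        $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
        $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
        $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.3 -s 4420
    done

    # initiator side: connect each subsystem, then wait for its serial to show up
    for i in $(seq 1 11); do
        nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" \
            -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.3 -s 4420
        until lsblk -l -o NAME,SERIAL | grep -q "SPDK$i"; do
            sleep 2
        done
    done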
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.3 -s 4420 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:52.231 01:39:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:54.130 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:54.130 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:54.130 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3 00:18:54.130 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:54.130 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:54.130 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:54.130 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:54.130 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.3 -s 4420 00:18:54.389 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:18:54.389 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:54.389 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.389 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:18:54.389 01:39:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:56.290 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:56.290 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:56.290 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4 00:18:56.290 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:56.290 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.290 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:56.290 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:56.290 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.3 -s 4420 00:18:56.548 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:18:56.548 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:56.548 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:56.548 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:56.548 01:39:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:18:58.451 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:58.451 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:58.451 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5 00:18:58.451 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:58.451 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:58.451 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:18:58.451 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:18:58.451 01:39:06 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.3 -s 4420 00:18:58.710 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:18:58.710 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:18:58.710 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local 
nvme_device_counter=1 nvme_devices=0 00:18:58.710 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:18:58.710 01:39:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:00.642 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:00.642 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:00.642 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6 00:19:00.642 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:00.642 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:00.642 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:00.642 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:00.642 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.3 -s 4420 00:19:00.901 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:19:00.901 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:00.901 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:00.901 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:00.901 01:39:09 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:02.805 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:02.805 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:02.805 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7 00:19:02.805 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:02.805 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:02.805 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:02.805 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:02.805 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.3 -s 4420 00:19:03.064 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:19:03.064 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1202 -- # local i=0 00:19:03.064 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:03.064 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:03.064 01:39:11 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:04.966 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:04.966 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:04.966 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8 00:19:04.966 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:04.966 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:04.966 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:04.966 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:04.966 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.3 -s 4420 00:19:05.225 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:19:05.225 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:05.225 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:05.225 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:05.225 01:39:13 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:07.129 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:07.129 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:07.129 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.3 -s 4420 00:19:07.388 01:39:15 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:07.388 01:39:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:09.918 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.3 -s 4420 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:09.919 01:39:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2 00:19:11.819 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:11.819 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:11.819 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11 00:19:11.819 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:11.819 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:11.819 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0 00:19:11.819 01:39:19 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:19:11.819 [global] 00:19:11.819 thread=1 00:19:11.819 invalidate=1 00:19:11.819 rw=read 00:19:11.819 time_based=1 
00:19:11.819 runtime=10 00:19:11.819 ioengine=libaio 00:19:11.819 direct=1 00:19:11.819 bs=262144 00:19:11.819 iodepth=64 00:19:11.819 norandommap=1 00:19:11.819 numjobs=1 00:19:11.819 00:19:11.819 [job0] 00:19:11.819 filename=/dev/nvme0n1 00:19:11.819 [job1] 00:19:11.819 filename=/dev/nvme10n1 00:19:11.819 [job2] 00:19:11.819 filename=/dev/nvme1n1 00:19:11.819 [job3] 00:19:11.819 filename=/dev/nvme2n1 00:19:11.819 [job4] 00:19:11.819 filename=/dev/nvme3n1 00:19:11.819 [job5] 00:19:11.819 filename=/dev/nvme4n1 00:19:11.819 [job6] 00:19:11.819 filename=/dev/nvme5n1 00:19:11.819 [job7] 00:19:11.819 filename=/dev/nvme6n1 00:19:11.819 [job8] 00:19:11.819 filename=/dev/nvme7n1 00:19:11.819 [job9] 00:19:11.819 filename=/dev/nvme8n1 00:19:11.819 [job10] 00:19:11.819 filename=/dev/nvme9n1 00:19:11.819 Could not set queue depth (nvme0n1) 00:19:11.819 Could not set queue depth (nvme10n1) 00:19:11.819 Could not set queue depth (nvme1n1) 00:19:11.819 Could not set queue depth (nvme2n1) 00:19:11.819 Could not set queue depth (nvme3n1) 00:19:11.819 Could not set queue depth (nvme4n1) 00:19:11.819 Could not set queue depth (nvme5n1) 00:19:11.819 Could not set queue depth (nvme6n1) 00:19:11.819 Could not set queue depth (nvme7n1) 00:19:11.819 Could not set queue depth (nvme8n1) 00:19:11.819 Could not set queue depth (nvme9n1) 00:19:11.819 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:11.819 fio-3.35 00:19:11.819 Starting 11 threads 00:19:24.028 00:19:24.028 job0: (groupid=0, jobs=1): err= 0: pid=77358: Sun Nov 17 01:39:30 2024 00:19:24.028 read: IOPS=166, BW=41.7MiB/s (43.8MB/s)(422MiB/10119msec) 00:19:24.028 slat (usec): min=19, max=182934, avg=5927.38, stdev=14903.25 00:19:24.028 clat (msec): min=23, max=667, avg=376.88, stdev=63.44 00:19:24.028 lat (msec): min=24, max=667, avg=382.81, stdev=64.35 00:19:24.028 clat percentiles (msec): 00:19:24.028 | 1.00th=[ 142], 5.00th=[ 305], 10.00th=[ 330], 20.00th=[ 347], 00:19:24.029 | 30.00th=[ 363], 40.00th=[ 372], 50.00th=[ 380], 60.00th=[ 393], 00:19:24.029 | 70.00th=[ 405], 80.00th=[ 418], 90.00th=[ 439], 95.00th=[ 464], 00:19:24.029 | 99.00th=[ 493], 99.50th=[ 506], 99.90th=[ 523], 99.95th=[ 667], 00:19:24.029 | 99.99th=[ 667] 00:19:24.029 bw ( KiB/s): min=32256, max=47616, 
per=5.34%, avg=41587.55, stdev=3562.34, samples=20 00:19:24.029 iops : min= 126, max= 186, avg=162.35, stdev=13.93, samples=20 00:19:24.029 lat (msec) : 50=0.06%, 100=0.71%, 250=3.26%, 500=95.03%, 750=0.95% 00:19:24.029 cpu : usr=0.07%, sys=0.86%, ctx=365, majf=0, minf=4097 00:19:24.029 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:19:24.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.029 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.029 issued rwts: total=1689,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.029 job1: (groupid=0, jobs=1): err= 0: pid=77359: Sun Nov 17 01:39:30 2024 00:19:24.029 read: IOPS=206, BW=51.7MiB/s (54.2MB/s)(521MiB/10072msec) 00:19:24.029 slat (usec): min=19, max=171074, avg=4592.02, stdev=12825.76 00:19:24.029 clat (msec): min=7, max=505, avg=304.15, stdev=120.49 00:19:24.029 lat (msec): min=10, max=575, avg=308.74, stdev=122.51 00:19:24.029 clat percentiles (msec): 00:19:24.029 | 1.00th=[ 28], 5.00th=[ 80], 10.00th=[ 146], 20.00th=[ 171], 00:19:24.029 | 30.00th=[ 188], 40.00th=[ 334], 50.00th=[ 359], 60.00th=[ 376], 00:19:24.029 | 70.00th=[ 388], 80.00th=[ 405], 90.00th=[ 426], 95.00th=[ 443], 00:19:24.029 | 99.00th=[ 468], 99.50th=[ 472], 99.90th=[ 502], 99.95th=[ 502], 00:19:24.029 | 99.99th=[ 506] 00:19:24.029 bw ( KiB/s): min=33280, max=101888, per=6.65%, avg=51749.70, stdev=21456.10, samples=20 00:19:24.029 iops : min= 130, max= 398, avg=202.05, stdev=83.82, samples=20 00:19:24.029 lat (msec) : 10=0.05%, 20=0.34%, 50=1.39%, 100=4.61%, 250=28.69% 00:19:24.029 lat (msec) : 500=64.78%, 750=0.14% 00:19:24.029 cpu : usr=0.10%, sys=0.89%, ctx=466, majf=0, minf=4097 00:19:24.029 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:19:24.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.029 issued rwts: total=2084,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.029 job2: (groupid=0, jobs=1): err= 0: pid=77360: Sun Nov 17 01:39:30 2024 00:19:24.029 read: IOPS=88, BW=22.2MiB/s (23.3MB/s)(226MiB/10172msec) 00:19:24.029 slat (usec): min=22, max=381036, avg=11074.99, stdev=32931.62 00:19:24.029 clat (msec): min=160, max=1029, avg=709.04, stdev=181.35 00:19:24.029 lat (msec): min=160, max=1029, avg=720.11, stdev=183.46 00:19:24.029 clat percentiles (msec): 00:19:24.029 | 1.00th=[ 165], 5.00th=[ 249], 10.00th=[ 422], 20.00th=[ 642], 00:19:24.029 | 30.00th=[ 693], 40.00th=[ 718], 50.00th=[ 735], 60.00th=[ 768], 00:19:24.029 | 70.00th=[ 802], 80.00th=[ 844], 90.00th=[ 885], 95.00th=[ 911], 00:19:24.029 | 99.00th=[ 961], 99.50th=[ 961], 99.90th=[ 1028], 99.95th=[ 1028], 00:19:24.029 | 99.99th=[ 1028] 00:19:24.029 bw ( KiB/s): min=13312, max=32256, per=2.76%, avg=21499.05, stdev=4788.53, samples=20 00:19:24.029 iops : min= 52, max= 126, avg=83.85, stdev=18.69, samples=20 00:19:24.029 lat (msec) : 250=6.64%, 500=4.10%, 750=43.52%, 1000=45.63%, 2000=0.11% 00:19:24.029 cpu : usr=0.04%, sys=0.52%, ctx=170, majf=0, minf=4097 00:19:24.029 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.5%, >=64=93.0% 00:19:24.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.029 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.029 issued rwts: 
total=903,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.029 job3: (groupid=0, jobs=1): err= 0: pid=77362: Sun Nov 17 01:39:30 2024 00:19:24.029 read: IOPS=83, BW=20.8MiB/s (21.9MB/s)(212MiB/10173msec) 00:19:24.029 slat (usec): min=19, max=359700, avg=11794.50, stdev=35641.38 00:19:24.029 clat (msec): min=121, max=1117, avg=755.00, stdev=214.35 00:19:24.029 lat (msec): min=121, max=1170, avg=766.79, stdev=215.76 00:19:24.029 clat percentiles (msec): 00:19:24.029 | 1.00th=[ 142], 5.00th=[ 351], 10.00th=[ 388], 20.00th=[ 609], 00:19:24.029 | 30.00th=[ 676], 40.00th=[ 718], 50.00th=[ 776], 60.00th=[ 852], 00:19:24.029 | 70.00th=[ 894], 80.00th=[ 953], 90.00th=[ 986], 95.00th=[ 1011], 00:19:24.029 | 99.00th=[ 1083], 99.50th=[ 1099], 99.90th=[ 1116], 99.95th=[ 1116], 00:19:24.029 | 99.99th=[ 1116] 00:19:24.029 bw ( KiB/s): min= 7680, max=32320, per=2.58%, avg=20095.40, stdev=8031.29, samples=20 00:19:24.029 iops : min= 30, max= 126, avg=78.35, stdev=31.35, samples=20 00:19:24.029 lat (msec) : 250=3.07%, 500=9.08%, 750=34.55%, 1000=46.11%, 2000=7.19% 00:19:24.029 cpu : usr=0.04%, sys=0.40%, ctx=183, majf=0, minf=4097 00:19:24.029 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=0.9%, 16=1.9%, 32=3.8%, >=64=92.6% 00:19:24.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.029 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.029 issued rwts: total=848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.029 job4: (groupid=0, jobs=1): err= 0: pid=77365: Sun Nov 17 01:39:30 2024 00:19:24.029 read: IOPS=81, BW=20.3MiB/s (21.3MB/s)(206MiB/10162msec) 00:19:24.029 slat (usec): min=19, max=311687, avg=12137.47, stdev=36512.75 00:19:24.029 clat (msec): min=156, max=1158, avg=776.13, stdev=163.88 00:19:24.029 lat (msec): min=226, max=1158, avg=788.26, stdev=163.43 00:19:24.029 clat percentiles (msec): 00:19:24.029 | 1.00th=[ 228], 5.00th=[ 481], 10.00th=[ 592], 20.00th=[ 659], 00:19:24.029 | 30.00th=[ 693], 40.00th=[ 718], 50.00th=[ 768], 60.00th=[ 835], 00:19:24.029 | 70.00th=[ 885], 80.00th=[ 927], 90.00th=[ 969], 95.00th=[ 995], 00:19:24.029 | 99.00th=[ 1083], 99.50th=[ 1083], 99.90th=[ 1167], 99.95th=[ 1167], 00:19:24.029 | 99.99th=[ 1167] 00:19:24.029 bw ( KiB/s): min=11776, max=25600, per=2.50%, avg=19480.30, stdev=4511.82, samples=20 00:19:24.029 iops : min= 46, max= 100, avg=75.95, stdev=17.72, samples=20 00:19:24.029 lat (msec) : 250=1.58%, 500=4.49%, 750=39.93%, 1000=49.51%, 2000=4.49% 00:19:24.029 cpu : usr=0.01%, sys=0.41%, ctx=177, majf=0, minf=4097 00:19:24.029 IO depths : 1=0.1%, 2=0.2%, 4=0.5%, 8=1.0%, 16=1.9%, 32=3.9%, >=64=92.4% 00:19:24.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.029 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.029 issued rwts: total=824,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.029 job5: (groupid=0, jobs=1): err= 0: pid=77369: Sun Nov 17 01:39:30 2024 00:19:24.029 read: IOPS=551, BW=138MiB/s (145MB/s)(1381MiB/10014msec) 00:19:24.029 slat (usec): min=18, max=51954, avg=1805.95, stdev=5002.69 00:19:24.029 clat (msec): min=12, max=260, avg=114.12, stdev=67.81 00:19:24.029 lat (msec): min=14, max=260, avg=115.92, stdev=68.82 00:19:24.029 clat percentiles (msec): 00:19:24.029 | 1.00th=[ 39], 5.00th=[ 41], 10.00th=[ 43], 20.00th=[ 44], 
00:19:24.029 | 30.00th=[ 46], 40.00th=[ 47], 50.00th=[ 142], 60.00th=[ 167], 00:19:24.029 | 70.00th=[ 174], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 201], 00:19:24.029 | 99.00th=[ 226], 99.50th=[ 241], 99.90th=[ 262], 99.95th=[ 262], 00:19:24.029 | 99.99th=[ 262] 00:19:24.029 bw ( KiB/s): min=85504, max=367104, per=17.95%, avg=139751.60, stdev=102584.89, samples=20 00:19:24.029 iops : min= 334, max= 1434, avg=545.70, stdev=400.80, samples=20 00:19:24.029 lat (msec) : 20=0.20%, 50=44.84%, 100=2.61%, 250=52.10%, 500=0.25% 00:19:24.029 cpu : usr=0.34%, sys=2.01%, ctx=1145, majf=0, minf=4097 00:19:24.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:19:24.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.029 issued rwts: total=5522,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.029 job6: (groupid=0, jobs=1): err= 0: pid=77370: Sun Nov 17 01:39:30 2024 00:19:24.029 read: IOPS=314, BW=78.6MiB/s (82.5MB/s)(796MiB/10116msec) 00:19:24.029 slat (usec): min=19, max=280337, avg=3110.45, stdev=9622.86 00:19:24.029 clat (msec): min=25, max=562, avg=200.06, stdev=74.69 00:19:24.029 lat (msec): min=25, max=652, avg=203.17, stdev=75.41 00:19:24.029 clat percentiles (msec): 00:19:24.029 | 1.00th=[ 130], 5.00th=[ 146], 10.00th=[ 157], 20.00th=[ 165], 00:19:24.029 | 30.00th=[ 171], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:19:24.029 | 70.00th=[ 190], 80.00th=[ 199], 90.00th=[ 245], 95.00th=[ 397], 00:19:24.029 | 99.00th=[ 542], 99.50th=[ 550], 99.90th=[ 558], 99.95th=[ 567], 00:19:24.029 | 99.99th=[ 567] 00:19:24.029 bw ( KiB/s): min=34746, max=94720, per=10.25%, avg=79809.30, stdev=20502.21, samples=20 00:19:24.029 iops : min= 135, max= 370, avg=311.65, stdev=80.22, samples=20 00:19:24.029 lat (msec) : 50=0.03%, 250=90.07%, 500=8.45%, 750=1.45% 00:19:24.029 cpu : usr=0.20%, sys=1.32%, ctx=644, majf=0, minf=4097 00:19:24.029 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.0%, >=64=98.0% 00:19:24.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.029 issued rwts: total=3182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.029 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.029 job7: (groupid=0, jobs=1): err= 0: pid=77371: Sun Nov 17 01:39:30 2024 00:19:24.029 read: IOPS=107, BW=26.9MiB/s (28.2MB/s)(273MiB/10163msec) 00:19:24.029 slat (usec): min=19, max=373106, avg=8688.75, stdev=31879.22 00:19:24.029 clat (msec): min=11, max=1059, avg=586.15, stdev=331.54 00:19:24.029 lat (msec): min=12, max=1059, avg=594.84, stdev=336.10 00:19:24.029 clat percentiles (msec): 00:19:24.029 | 1.00th=[ 69], 5.00th=[ 109], 10.00th=[ 127], 20.00th=[ 169], 00:19:24.029 | 30.00th=[ 197], 40.00th=[ 625], 50.00th=[ 693], 60.00th=[ 768], 00:19:24.029 | 70.00th=[ 835], 80.00th=[ 911], 90.00th=[ 969], 95.00th=[ 995], 00:19:24.029 | 99.00th=[ 1045], 99.50th=[ 1062], 99.90th=[ 1062], 99.95th=[ 1062], 00:19:24.029 | 99.99th=[ 1062] 00:19:24.029 bw ( KiB/s): min= 6144, max=97596, per=3.38%, avg=26314.75, stdev=23949.00, samples=20 00:19:24.029 iops : min= 24, max= 381, avg=102.70, stdev=93.54, samples=20 00:19:24.030 lat (msec) : 20=0.09%, 100=3.66%, 250=29.40%, 500=2.38%, 750=21.43% 00:19:24.030 lat (msec) : 1000=38.92%, 2000=4.12% 00:19:24.030 cpu : usr=0.02%, 
sys=0.48%, ctx=189, majf=0, minf=4098 00:19:24.030 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.7%, 16=1.5%, 32=2.9%, >=64=94.2% 00:19:24.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.030 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.030 issued rwts: total=1092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.030 job8: (groupid=0, jobs=1): err= 0: pid=77372: Sun Nov 17 01:39:30 2024 00:19:24.030 read: IOPS=87, BW=21.9MiB/s (23.0MB/s)(223MiB/10170msec) 00:19:24.030 slat (usec): min=19, max=464738, avg=10609.61, stdev=32088.99 00:19:24.030 clat (msec): min=45, max=987, avg=719.02, stdev=163.72 00:19:24.030 lat (msec): min=46, max=987, avg=729.63, stdev=165.21 00:19:24.030 clat percentiles (msec): 00:19:24.030 | 1.00th=[ 59], 5.00th=[ 443], 10.00th=[ 558], 20.00th=[ 651], 00:19:24.030 | 30.00th=[ 684], 40.00th=[ 718], 50.00th=[ 743], 60.00th=[ 768], 00:19:24.030 | 70.00th=[ 793], 80.00th=[ 835], 90.00th=[ 877], 95.00th=[ 911], 00:19:24.030 | 99.00th=[ 953], 99.50th=[ 986], 99.90th=[ 986], 99.95th=[ 986], 00:19:24.030 | 99.99th=[ 986] 00:19:24.030 bw ( KiB/s): min=14336, max=32256, per=2.72%, avg=21192.05, stdev=4350.03, samples=20 00:19:24.030 iops : min= 56, max= 126, avg=82.65, stdev=17.02, samples=20 00:19:24.030 lat (msec) : 50=0.56%, 100=2.58%, 250=0.11%, 500=2.24%, 750=46.91% 00:19:24.030 lat (msec) : 1000=47.59% 00:19:24.030 cpu : usr=0.01%, sys=0.45%, ctx=182, majf=0, minf=4097 00:19:24.030 IO depths : 1=0.1%, 2=0.2%, 4=0.4%, 8=0.9%, 16=1.8%, 32=3.6%, >=64=92.9% 00:19:24.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.030 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.030 issued rwts: total=891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.030 job9: (groupid=0, jobs=1): err= 0: pid=77373: Sun Nov 17 01:39:30 2024 00:19:24.030 read: IOPS=167, BW=41.8MiB/s (43.9MB/s)(424MiB/10120msec) 00:19:24.030 slat (usec): min=20, max=115775, avg=5898.86, stdev=14285.28 00:19:24.030 clat (msec): min=14, max=562, avg=375.94, stdev=64.41 00:19:24.030 lat (msec): min=16, max=562, avg=381.84, stdev=65.07 00:19:24.030 clat percentiles (msec): 00:19:24.030 | 1.00th=[ 112], 5.00th=[ 292], 10.00th=[ 321], 20.00th=[ 342], 00:19:24.030 | 30.00th=[ 359], 40.00th=[ 372], 50.00th=[ 384], 60.00th=[ 393], 00:19:24.030 | 70.00th=[ 405], 80.00th=[ 422], 90.00th=[ 439], 95.00th=[ 456], 00:19:24.030 | 99.00th=[ 502], 99.50th=[ 518], 99.90th=[ 567], 99.95th=[ 567], 00:19:24.030 | 99.99th=[ 567] 00:19:24.030 bw ( KiB/s): min=33346, max=47520, per=5.37%, avg=41774.15, stdev=3104.88, samples=20 00:19:24.030 iops : min= 130, max= 185, avg=163.05, stdev=12.03, samples=20 00:19:24.030 lat (msec) : 20=0.06%, 100=0.77%, 250=2.72%, 500=95.34%, 750=1.12% 00:19:24.030 cpu : usr=0.15%, sys=0.68%, ctx=342, majf=0, minf=4097 00:19:24.030 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.3% 00:19:24.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.030 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.030 issued rwts: total=1694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.030 job10: (groupid=0, jobs=1): err= 0: pid=77374: Sun Nov 17 01:39:30 2024 00:19:24.030 read: IOPS=1211, BW=303MiB/s 
(317MB/s)(3050MiB/10075msec) 00:19:24.030 slat (usec): min=15, max=47379, avg=813.53, stdev=2292.42 00:19:24.030 clat (msec): min=11, max=248, avg=51.96, stdev=34.19 00:19:24.030 lat (msec): min=11, max=248, avg=52.77, stdev=34.70 00:19:24.030 clat percentiles (msec): 00:19:24.030 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 38], 20.00th=[ 39], 00:19:24.030 | 30.00th=[ 41], 40.00th=[ 42], 50.00th=[ 43], 60.00th=[ 43], 00:19:24.030 | 70.00th=[ 45], 80.00th=[ 46], 90.00th=[ 54], 95.00th=[ 161], 00:19:24.030 | 99.00th=[ 188], 99.50th=[ 199], 99.90th=[ 226], 99.95th=[ 236], 00:19:24.030 | 99.99th=[ 249] 00:19:24.030 bw ( KiB/s): min=91465, max=411648, per=39.91%, avg=310656.85, stdev=125305.64, samples=20 00:19:24.030 iops : min= 357, max= 1608, avg=1213.40, stdev=489.45, samples=20 00:19:24.030 lat (msec) : 20=0.02%, 50=87.38%, 100=4.59%, 250=8.01% 00:19:24.030 cpu : usr=0.53%, sys=4.24%, ctx=2391, majf=0, minf=4097 00:19:24.030 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:19:24.030 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:24.030 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:24.030 issued rwts: total=12201,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:24.030 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:24.030 00:19:24.030 Run status group 0 (all jobs): 00:19:24.030 READ: bw=760MiB/s (797MB/s), 20.3MiB/s-303MiB/s (21.3MB/s-317MB/s), io=7733MiB (8108MB), run=10014-10173msec 00:19:24.030 00:19:24.030 Disk stats (read/write): 00:19:24.030 nvme0n1: ios=3251/0, merge=0/0, ticks=1224206/0, in_queue=1224206, util=97.76% 00:19:24.030 nvme10n1: ios=4041/0, merge=0/0, ticks=1233200/0, in_queue=1233200, util=97.95% 00:19:24.030 nvme1n1: ios=1682/0, merge=0/0, ticks=1203022/0, in_queue=1203022, util=98.22% 00:19:24.030 nvme2n1: ios=1577/0, merge=0/0, ticks=1208958/0, in_queue=1208958, util=98.29% 00:19:24.030 nvme3n1: ios=1531/0, merge=0/0, ticks=1184359/0, in_queue=1184359, util=98.31% 00:19:24.030 nvme4n1: ios=10934/0, merge=0/0, ticks=1242039/0, in_queue=1242039, util=98.50% 00:19:24.030 nvme5n1: ios=6239/0, merge=0/0, ticks=1214290/0, in_queue=1214290, util=98.53% 00:19:24.030 nvme6n1: ios=2056/0, merge=0/0, ticks=1179788/0, in_queue=1179788, util=98.70% 00:19:24.030 nvme7n1: ios=1664/0, merge=0/0, ticks=1197341/0, in_queue=1197341, util=98.98% 00:19:24.030 nvme8n1: ios=3261/0, merge=0/0, ticks=1223165/0, in_queue=1223165, util=99.05% 00:19:24.030 nvme9n1: ios=24274/0, merge=0/0, ticks=1228495/0, in_queue=1228495, util=99.16% 00:19:24.030 01:39:30 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:19:24.030 [global] 00:19:24.030 thread=1 00:19:24.030 invalidate=1 00:19:24.030 rw=randwrite 00:19:24.030 time_based=1 00:19:24.030 runtime=10 00:19:24.030 ioengine=libaio 00:19:24.030 direct=1 00:19:24.030 bs=262144 00:19:24.030 iodepth=64 00:19:24.030 norandommap=1 00:19:24.030 numjobs=1 00:19:24.030 00:19:24.030 [job0] 00:19:24.030 filename=/dev/nvme0n1 00:19:24.030 [job1] 00:19:24.030 filename=/dev/nvme10n1 00:19:24.030 [job2] 00:19:24.030 filename=/dev/nvme1n1 00:19:24.030 [job3] 00:19:24.030 filename=/dev/nvme2n1 00:19:24.030 [job4] 00:19:24.030 filename=/dev/nvme3n1 00:19:24.030 [job5] 00:19:24.030 filename=/dev/nvme4n1 00:19:24.030 [job6] 00:19:24.030 filename=/dev/nvme5n1 00:19:24.030 [job7] 00:19:24.030 filename=/dev/nvme6n1 00:19:24.030 [job8] 00:19:24.030 
filename=/dev/nvme7n1 00:19:24.030 [job9] 00:19:24.030 filename=/dev/nvme8n1 00:19:24.030 [job10] 00:19:24.030 filename=/dev/nvme9n1 00:19:24.030 Could not set queue depth (nvme0n1) 00:19:24.030 Could not set queue depth (nvme10n1) 00:19:24.030 Could not set queue depth (nvme1n1) 00:19:24.030 Could not set queue depth (nvme2n1) 00:19:24.030 Could not set queue depth (nvme3n1) 00:19:24.030 Could not set queue depth (nvme4n1) 00:19:24.030 Could not set queue depth (nvme5n1) 00:19:24.030 Could not set queue depth (nvme6n1) 00:19:24.030 Could not set queue depth (nvme7n1) 00:19:24.030 Could not set queue depth (nvme8n1) 00:19:24.030 Could not set queue depth (nvme9n1) 00:19:24.030 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:19:24.030 fio-3.35 00:19:24.030 Starting 11 threads 00:19:34.011 00:19:34.011 job0: (groupid=0, jobs=1): err= 0: pid=77568: Sun Nov 17 01:39:41 2024 00:19:34.011 write: IOPS=232, BW=58.2MiB/s (61.1MB/s)(586MiB/10066msec); 0 zone resets 00:19:34.011 slat (usec): min=15, max=66726, avg=4134.06, stdev=8060.71 00:19:34.011 clat (msec): min=2, max=337, avg=270.51, stdev=96.68 00:19:34.011 lat (msec): min=2, max=338, avg=274.65, stdev=98.14 00:19:34.011 clat percentiles (msec): 00:19:34.011 | 1.00th=[ 8], 5.00th=[ 30], 10.00th=[ 71], 20.00th=[ 279], 00:19:34.011 | 30.00th=[ 300], 40.00th=[ 305], 50.00th=[ 317], 60.00th=[ 321], 00:19:34.011 | 70.00th=[ 321], 80.00th=[ 321], 90.00th=[ 326], 95.00th=[ 330], 00:19:34.011 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 338], 99.95th=[ 338], 00:19:34.011 | 99.99th=[ 338] 00:19:34.011 bw ( KiB/s): min=49152, max=196096, per=7.19%, avg=58424.45, stdev=32430.53, samples=20 00:19:34.011 iops : min= 192, max= 766, avg=228.20, stdev=126.69, samples=20 00:19:34.011 lat (msec) : 4=0.13%, 10=1.36%, 20=1.92%, 50=4.18%, 100=6.61% 00:19:34.011 lat (msec) : 250=3.75%, 500=82.05% 00:19:34.011 cpu : usr=0.40%, sys=0.62%, ctx=2344, majf=0, minf=1 00:19:34.011 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.4%, >=64=97.3% 00:19:34.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.011 issued rwts: 
total=0,2345,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.011 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.011 job1: (groupid=0, jobs=1): err= 0: pid=77569: Sun Nov 17 01:39:41 2024 00:19:34.011 write: IOPS=202, BW=50.7MiB/s (53.1MB/s)(513MiB/10128msec); 0 zone resets 00:19:34.011 slat (usec): min=15, max=154281, avg=4867.66, stdev=9132.82 00:19:34.011 clat (msec): min=125, max=389, avg=310.91, stdev=28.13 00:19:34.011 lat (msec): min=135, max=408, avg=315.78, stdev=27.45 00:19:34.011 clat percentiles (msec): 00:19:34.011 | 1.00th=[ 174], 5.00th=[ 257], 10.00th=[ 296], 20.00th=[ 305], 00:19:34.011 | 30.00th=[ 309], 40.00th=[ 317], 50.00th=[ 321], 60.00th=[ 321], 00:19:34.011 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 330], 95.00th=[ 334], 00:19:34.011 | 99.00th=[ 342], 99.50th=[ 355], 99.90th=[ 388], 99.95th=[ 388], 00:19:34.011 | 99.99th=[ 388] 00:19:34.011 bw ( KiB/s): min=38476, max=61952, per=6.26%, avg=50911.90, stdev=3912.03, samples=20 00:19:34.011 iops : min= 150, max= 242, avg=198.80, stdev=15.33, samples=20 00:19:34.011 lat (msec) : 250=4.14%, 500=95.86% 00:19:34.011 cpu : usr=0.34%, sys=0.63%, ctx=928, majf=0, minf=1 00:19:34.011 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9% 00:19:34.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.011 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.011 issued rwts: total=0,2052,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.011 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.011 job2: (groupid=0, jobs=1): err= 0: pid=77581: Sun Nov 17 01:39:41 2024 00:19:34.011 write: IOPS=375, BW=93.8MiB/s (98.3MB/s)(957MiB/10207msec); 0 zone resets 00:19:34.011 slat (usec): min=17, max=81059, avg=2517.58, stdev=4660.61 00:19:34.011 clat (msec): min=20, max=428, avg=168.06, stdev=31.28 00:19:34.011 lat (msec): min=21, max=428, avg=170.58, stdev=31.02 00:19:34.011 clat percentiles (msec): 00:19:34.011 | 1.00th=[ 69], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:19:34.011 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 167], 60.00th=[ 169], 00:19:34.011 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 174], 95.00th=[ 180], 00:19:34.011 | 99.00th=[ 338], 99.50th=[ 384], 99.90th=[ 422], 99.95th=[ 426], 00:19:34.011 | 99.99th=[ 430] 00:19:34.011 bw ( KiB/s): min=86528, max=98304, per=11.85%, avg=96339.10, stdev=2703.21, samples=20 00:19:34.011 iops : min= 338, max= 384, avg=376.30, stdev=10.56, samples=20 00:19:34.011 lat (msec) : 50=0.52%, 100=1.28%, 250=96.03%, 500=2.17% 00:19:34.011 cpu : usr=0.56%, sys=1.03%, ctx=3561, majf=0, minf=1 00:19:34.011 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:19:34.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.011 issued rwts: total=0,3828,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.011 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.011 job3: (groupid=0, jobs=1): err= 0: pid=77582: Sun Nov 17 01:39:41 2024 00:19:34.011 write: IOPS=205, BW=51.4MiB/s (53.9MB/s)(521MiB/10138msec); 0 zone resets 00:19:34.011 slat (usec): min=16, max=40541, avg=4793.68, stdev=8446.32 00:19:34.011 clat (msec): min=39, max=369, avg=306.35, stdev=36.68 00:19:34.011 lat (msec): min=39, max=369, avg=311.15, stdev=36.49 00:19:34.011 clat percentiles (msec): 00:19:34.011 | 1.00th=[ 125], 5.00th=[ 249], 10.00th=[ 288], 20.00th=[ 300], 00:19:34.011 | 
30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:19:34.011 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 326], 95.00th=[ 330], 00:19:34.011 | 99.00th=[ 338], 99.50th=[ 338], 99.90th=[ 351], 99.95th=[ 372], 00:19:34.011 | 99.99th=[ 372] 00:19:34.011 bw ( KiB/s): min=49152, max=59904, per=6.37%, avg=51737.95, stdev=2279.00, samples=20 00:19:34.011 iops : min= 192, max= 234, avg=202.05, stdev= 8.88, samples=20 00:19:34.011 lat (msec) : 50=0.19%, 100=0.58%, 250=4.51%, 500=94.72% 00:19:34.011 cpu : usr=0.41%, sys=0.56%, ctx=2503, majf=0, minf=1 00:19:34.011 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:19:34.011 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.011 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.011 issued rwts: total=0,2084,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.011 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.011 job4: (groupid=0, jobs=1): err= 0: pid=77583: Sun Nov 17 01:39:41 2024 00:19:34.011 write: IOPS=241, BW=60.4MiB/s (63.3MB/s)(616MiB/10201msec); 0 zone resets 00:19:34.011 slat (usec): min=16, max=192351, avg=3948.20, stdev=7997.87 00:19:34.011 clat (msec): min=128, max=495, avg=261.01, stdev=43.96 00:19:34.011 lat (msec): min=128, max=495, avg=264.96, stdev=43.81 00:19:34.011 clat percentiles (msec): 00:19:34.011 | 1.00th=[ 213], 5.00th=[ 230], 10.00th=[ 236], 20.00th=[ 241], 00:19:34.012 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:19:34.012 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 300], 95.00th=[ 372], 00:19:34.012 | 99.00th=[ 451], 99.50th=[ 472], 99.90th=[ 493], 99.95th=[ 493], 00:19:34.012 | 99.99th=[ 498] 00:19:34.012 bw ( KiB/s): min=32768, max=69632, per=7.56%, avg=61444.70, stdev=9365.80, samples=20 00:19:34.012 iops : min= 128, max= 272, avg=240.00, stdev=36.61, samples=20 00:19:34.012 lat (msec) : 250=44.99%, 500=55.01% 00:19:34.012 cpu : usr=0.31%, sys=0.69%, ctx=2563, majf=0, minf=1 00:19:34.012 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.4% 00:19:34.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.012 issued rwts: total=0,2463,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.012 job5: (groupid=0, jobs=1): err= 0: pid=77584: Sun Nov 17 01:39:41 2024 00:19:34.012 write: IOPS=242, BW=60.7MiB/s (63.6MB/s)(615MiB/10141msec); 0 zone resets 00:19:34.012 slat (usec): min=18, max=249127, avg=4058.18, stdev=8635.11 00:19:34.012 clat (msec): min=137, max=568, avg=259.56, stdev=47.05 00:19:34.012 lat (msec): min=146, max=568, avg=263.61, stdev=47.06 00:19:34.012 clat percentiles (msec): 00:19:34.012 | 1.00th=[ 184], 5.00th=[ 228], 10.00th=[ 234], 20.00th=[ 241], 00:19:34.012 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:19:34.012 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 279], 95.00th=[ 388], 00:19:34.012 | 99.00th=[ 439], 99.50th=[ 502], 99.90th=[ 567], 99.95th=[ 567], 00:19:34.012 | 99.99th=[ 567] 00:19:34.012 bw ( KiB/s): min=24576, max=70144, per=7.55%, avg=61384.70, stdev=11097.79, samples=20 00:19:34.012 iops : min= 96, max= 274, avg=239.75, stdev=43.42, samples=20 00:19:34.012 lat (msec) : 250=46.49%, 500=52.91%, 750=0.61% 00:19:34.012 cpu : usr=0.46%, sys=0.72%, ctx=2978, majf=0, minf=1 00:19:34.012 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 
32=1.3%, >=64=97.4% 00:19:34.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.012 issued rwts: total=0,2461,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.012 job6: (groupid=0, jobs=1): err= 0: pid=77585: Sun Nov 17 01:39:41 2024 00:19:34.012 write: IOPS=206, BW=51.6MiB/s (54.1MB/s)(523MiB/10142msec); 0 zone resets 00:19:34.012 slat (usec): min=17, max=35102, avg=4775.55, stdev=8405.77 00:19:34.012 clat (msec): min=37, max=364, avg=305.23, stdev=37.66 00:19:34.012 lat (msec): min=37, max=364, avg=310.00, stdev=37.53 00:19:34.012 clat percentiles (msec): 00:19:34.012 | 1.00th=[ 127], 5.00th=[ 241], 10.00th=[ 284], 20.00th=[ 300], 00:19:34.012 | 30.00th=[ 305], 40.00th=[ 313], 50.00th=[ 317], 60.00th=[ 321], 00:19:34.012 | 70.00th=[ 321], 80.00th=[ 326], 90.00th=[ 326], 95.00th=[ 330], 00:19:34.012 | 99.00th=[ 334], 99.50th=[ 338], 99.90th=[ 347], 99.95th=[ 347], 00:19:34.012 | 99.99th=[ 363] 00:19:34.012 bw ( KiB/s): min=49053, max=60416, per=6.39%, avg=51969.00, stdev=2857.85, samples=20 00:19:34.012 iops : min= 191, max= 236, avg=202.95, stdev=11.13, samples=20 00:19:34.012 lat (msec) : 50=0.19%, 100=0.57%, 250=5.21%, 500=94.03% 00:19:34.012 cpu : usr=0.45%, sys=0.58%, ctx=2597, majf=0, minf=1 00:19:34.012 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:19:34.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.012 issued rwts: total=0,2093,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.012 job7: (groupid=0, jobs=1): err= 0: pid=77586: Sun Nov 17 01:39:41 2024 00:19:34.012 write: IOPS=620, BW=155MiB/s (163MB/s)(1584MiB/10220msec); 0 zone resets 00:19:34.012 slat (usec): min=18, max=78401, avg=1510.41, stdev=3262.08 00:19:34.012 clat (usec): min=1610, max=518205, avg=101646.64, stdev=58383.14 00:19:34.012 lat (msec): min=2, max=518, avg=103.16, stdev=59.04 00:19:34.012 clat percentiles (msec): 00:19:34.012 | 1.00th=[ 15], 5.00th=[ 80], 10.00th=[ 86], 20.00th=[ 88], 00:19:34.012 | 30.00th=[ 90], 40.00th=[ 92], 50.00th=[ 92], 60.00th=[ 93], 00:19:34.012 | 70.00th=[ 94], 80.00th=[ 94], 90.00th=[ 96], 95.00th=[ 211], 00:19:34.012 | 99.00th=[ 384], 99.50th=[ 401], 99.90th=[ 481], 99.95th=[ 498], 00:19:34.012 | 99.99th=[ 518] 00:19:34.012 bw ( KiB/s): min=47104, max=216654, per=19.76%, avg=160567.10, stdev=45928.87, samples=20 00:19:34.012 iops : min= 184, max= 846, avg=627.20, stdev=179.39, samples=20 00:19:34.012 lat (msec) : 2=0.02%, 4=0.03%, 10=0.58%, 20=0.85%, 50=2.00% 00:19:34.012 lat (msec) : 100=90.83%, 250=1.17%, 500=4.48%, 750=0.03% 00:19:34.012 cpu : usr=0.95%, sys=1.77%, ctx=2328, majf=0, minf=1 00:19:34.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:19:34.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.012 issued rwts: total=0,6337,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.012 job8: (groupid=0, jobs=1): err= 0: pid=77587: Sun Nov 17 01:39:41 2024 00:19:34.012 write: IOPS=370, BW=92.7MiB/s (97.2MB/s)(946MiB/10205msec); 0 zone resets 00:19:34.012 slat (usec): 
min=15, max=77630, avg=2581.38, stdev=4782.96 00:19:34.012 clat (msec): min=6, max=527, avg=169.85, stdev=37.63 00:19:34.012 lat (msec): min=6, max=527, avg=172.43, stdev=37.84 00:19:34.012 clat percentiles (msec): 00:19:34.012 | 1.00th=[ 44], 5.00th=[ 157], 10.00th=[ 159], 20.00th=[ 161], 00:19:34.012 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 169], 60.00th=[ 169], 00:19:34.012 | 70.00th=[ 169], 80.00th=[ 171], 90.00th=[ 174], 95.00th=[ 180], 00:19:34.012 | 99.00th=[ 351], 99.50th=[ 435], 99.90th=[ 506], 99.95th=[ 527], 00:19:34.012 | 99.99th=[ 527] 00:19:34.012 bw ( KiB/s): min=53760, max=104750, per=11.72%, avg=95272.70, stdev=9990.70, samples=20 00:19:34.012 iops : min= 210, max= 409, avg=372.15, stdev=39.02, samples=20 00:19:34.012 lat (msec) : 10=0.21%, 20=0.34%, 50=0.58%, 100=0.71%, 250=95.51% 00:19:34.012 lat (msec) : 500=2.48%, 750=0.16% 00:19:34.012 cpu : usr=0.69%, sys=1.16%, ctx=4107, majf=0, minf=1 00:19:34.012 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:19:34.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.012 issued rwts: total=0,3785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.012 job9: (groupid=0, jobs=1): err= 0: pid=77589: Sun Nov 17 01:39:41 2024 00:19:34.012 write: IOPS=246, BW=61.5MiB/s (64.5MB/s)(624MiB/10138msec); 0 zone resets 00:19:34.012 slat (usec): min=16, max=78511, avg=4000.33, stdev=7237.51 00:19:34.012 clat (msec): min=43, max=439, avg=255.85, stdev=42.63 00:19:34.012 lat (msec): min=44, max=439, avg=259.85, stdev=42.74 00:19:34.012 clat percentiles (msec): 00:19:34.012 | 1.00th=[ 146], 5.00th=[ 226], 10.00th=[ 234], 20.00th=[ 239], 00:19:34.012 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:19:34.012 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 279], 95.00th=[ 372], 00:19:34.012 | 99.00th=[ 409], 99.50th=[ 430], 99.90th=[ 439], 99.95th=[ 439], 00:19:34.012 | 99.99th=[ 439] 00:19:34.012 bw ( KiB/s): min=40878, max=68096, per=7.66%, avg=62275.75, stdev=7995.86, samples=20 00:19:34.012 iops : min= 159, max= 266, avg=243.20, stdev=31.38, samples=20 00:19:34.012 lat (msec) : 50=0.16%, 100=0.32%, 250=46.47%, 500=53.04% 00:19:34.012 cpu : usr=0.52%, sys=0.68%, ctx=2560, majf=0, minf=1 00:19:34.012 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:19:34.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.012 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.012 issued rwts: total=0,2496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.012 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.012 job10: (groupid=0, jobs=1): err= 0: pid=77590: Sun Nov 17 01:39:41 2024 00:19:34.012 write: IOPS=246, BW=61.6MiB/s (64.6MB/s)(625MiB/10147msec); 0 zone resets 00:19:34.012 slat (usec): min=15, max=81924, avg=3997.20, stdev=7214.93 00:19:34.012 clat (msec): min=84, max=421, avg=255.66, stdev=38.39 00:19:34.012 lat (msec): min=84, max=421, avg=259.66, stdev=38.38 00:19:34.012 clat percentiles (msec): 00:19:34.012 | 1.00th=[ 157], 5.00th=[ 228], 10.00th=[ 234], 20.00th=[ 239], 00:19:34.012 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:19:34.012 | 70.00th=[ 255], 80.00th=[ 257], 90.00th=[ 284], 95.00th=[ 355], 00:19:34.012 | 99.00th=[ 405], 99.50th=[ 409], 99.90th=[ 422], 99.95th=[ 422], 00:19:34.012 | 99.99th=[ 
422] 00:19:34.012 bw ( KiB/s): min=40960, max=70144, per=7.68%, avg=62382.25, stdev=8102.74, samples=20 00:19:34.012 iops : min= 160, max= 274, avg=243.65, stdev=31.70, samples=20 00:19:34.012 lat (msec) : 100=0.16%, 250=46.44%, 500=53.40% 00:19:34.012 cpu : usr=0.46%, sys=0.69%, ctx=2899, majf=0, minf=1 00:19:34.012 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.3%, >=64=97.5% 00:19:34.012 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:34.013 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:19:34.013 issued rwts: total=0,2500,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:34.013 latency : target=0, window=0, percentile=100.00%, depth=64 00:19:34.013 00:19:34.013 Run status group 0 (all jobs): 00:19:34.013 WRITE: bw=794MiB/s (832MB/s), 50.7MiB/s-155MiB/s (53.1MB/s-163MB/s), io=8111MiB (8505MB), run=10066-10220msec 00:19:34.013 00:19:34.013 Disk stats (read/write): 00:19:34.013 nvme0n1: ios=49/4464, merge=0/0, ticks=48/1211933, in_queue=1211981, util=97.41% 00:19:34.013 nvme10n1: ios=49/3925, merge=0/0, ticks=57/1200219, in_queue=1200276, util=97.52% 00:19:34.013 nvme1n1: ios=32/7625, merge=0/0, ticks=55/1232588, in_queue=1232643, util=97.78% 00:19:34.013 nvme2n1: ios=0/4003, merge=0/0, ticks=0/1201668, in_queue=1201668, util=97.82% 00:19:34.013 nvme3n1: ios=0/4914, merge=0/0, ticks=0/1236479, in_queue=1236479, util=97.90% 00:19:34.013 nvme4n1: ios=0/4740, merge=0/0, ticks=0/1203540, in_queue=1203540, util=98.07% 00:19:34.013 nvme5n1: ios=0/4016, merge=0/0, ticks=0/1201880, in_queue=1201880, util=98.33% 00:19:34.013 nvme6n1: ios=0/12652, merge=0/0, ticks=0/1235945, in_queue=1235945, util=98.45% 00:19:34.013 nvme7n1: ios=0/7554, merge=0/0, ticks=0/1234625, in_queue=1234625, util=98.62% 00:19:34.013 nvme8n1: ios=0/4821, merge=0/0, ticks=0/1204117, in_queue=1204117, util=98.76% 00:19:34.013 nvme9n1: ios=0/4824, merge=0/0, ticks=0/1204631, in_queue=1204631, util=98.86% 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:34.013 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:19:34.013 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:19:34.013 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK3 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.013 01:39:41 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:19:34.013 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:19:34.013 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:19:34.013 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:19:34.013 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:19:34.014 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.014 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:19:34.273 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:19:34.273 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:19:34.273 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:19:34.273 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.274 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:19:34.533 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:19:34.533 rmmod nvme_tcp 00:19:34.533 rmmod nvme_fabrics 00:19:34.533 rmmod nvme_keyring 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 76902 ']' 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 76902 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 76902 ']' 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 76902 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76902 00:19:34.533 killing process with pid 76902 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76902' 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 76902 00:19:34.533 01:39:42 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 76902 00:19:37.158 01:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@246 -- # remove_spdk_ns 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@300 -- # return 0 00:19:37.158 00:19:37.158 real 0m52.282s 00:19:37.158 user 2m59.539s 00:19:37.158 sys 0m25.573s 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.158 ************************************ 00:19:37.158 END TEST nvmf_multiconnection 00:19:37.158 ************************************ 00:19:37.158 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:37.418 ************************************ 00:19:37.418 START TEST nvmf_initiator_timeout 00:19:37.418 ************************************ 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:19:37.418 * Looking for test storage... 00:19:37.418 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:37.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.418 --rc genhtml_branch_coverage=1 00:19:37.418 --rc genhtml_function_coverage=1 00:19:37.418 --rc genhtml_legend=1 00:19:37.418 --rc geninfo_all_blocks=1 00:19:37.418 --rc geninfo_unexecuted_blocks=1 00:19:37.418 00:19:37.418 ' 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:37.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.418 --rc genhtml_branch_coverage=1 00:19:37.418 --rc genhtml_function_coverage=1 00:19:37.418 --rc genhtml_legend=1 00:19:37.418 --rc geninfo_all_blocks=1 00:19:37.418 --rc geninfo_unexecuted_blocks=1 00:19:37.418 00:19:37.418 ' 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:37.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.418 --rc genhtml_branch_coverage=1 00:19:37.418 --rc genhtml_function_coverage=1 00:19:37.418 --rc genhtml_legend=1 00:19:37.418 --rc geninfo_all_blocks=1 00:19:37.418 --rc geninfo_unexecuted_blocks=1 00:19:37.418 00:19:37.418 ' 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:37.418 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.418 --rc genhtml_branch_coverage=1 00:19:37.418 --rc genhtml_function_coverage=1 00:19:37.418 --rc genhtml_legend=1 00:19:37.418 --rc geninfo_all_blocks=1 00:19:37.418 --rc geninfo_unexecuted_blocks=1 00:19:37.418 00:19:37.418 ' 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # 
source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.418 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.419 01:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:37.419 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:37.419 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:19:37.678 Cannot find device "nvmf_init_br" 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@162 -- # true 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:19:37.678 Cannot find device "nvmf_init_br2" 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@163 -- # true 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:19:37.678 Cannot find device "nvmf_tgt_br" 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@164 -- # true 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:19:37.678 Cannot find device "nvmf_tgt_br2" 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@165 -- # true 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:19:37.678 Cannot find device "nvmf_init_br" 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@166 -- # true 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:19:37.678 Cannot find device "nvmf_init_br2" 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@167 -- # true 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:19:37.678 Cannot find device "nvmf_tgt_br" 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@168 -- # true 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:19:37.678 Cannot find device "nvmf_tgt_br2" 00:19:37.678 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@169 -- # true 00:19:37.678 01:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:19:37.678 Cannot find device "nvmf_br" 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@170 -- # true 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:19:37.679 Cannot find device "nvmf_init_if" 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@171 -- # true 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:19:37.679 Cannot find device "nvmf_init_if2" 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@172 -- # true 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:37.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@173 -- # true 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:37.679 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@174 -- # true 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:19:37.679 01:39:45 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@198 -- # ip link set 
nvmf_init_br up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:19:37.679 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:19:37.938 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:37.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:19:37.938 00:19:37.938 --- 10.0.0.3 ping statistics --- 00:19:37.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.938 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:19:37.938 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:19:37.938 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.046 ms 00:19:37.938 00:19:37.938 --- 10.0.0.4 ping statistics --- 00:19:37.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.938 rtt min/avg/max/mdev = 0.046/0.046/0.046/0.000 ms 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:37.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:37.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:19:37.938 00:19:37.938 --- 10.0.0.1 ping statistics --- 00:19:37.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.938 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:19:37.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:37.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.062 ms 00:19:37.938 00:19:37.938 --- 10.0.0.2 ping statistics --- 00:19:37.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:37.938 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@461 -- # return 0 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=78032 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 78032 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 78032 ']' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:37.938 Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:37.938 01:39:46 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:37.938 [2024-11-17 01:39:46.360139] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:19:37.938 [2024-11-17 01:39:46.360321] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.197 [2024-11-17 01:39:46.542252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:38.197 [2024-11-17 01:39:46.630544] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.197 [2024-11-17 01:39:46.630878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.197 [2024-11-17 01:39:46.630910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.197 [2024-11-17 01:39:46.630923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.197 [2024-11-17 01:39:46.630935] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
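The trace above launches nvmf_tgt inside the nvmf_tgt_ns_spdk namespace and then blocks in waitforlisten (pid 78032, max_retries=100) until the application is alive and listening on the UNIX domain socket /var/tmp/spdk.sock before any RPCs are issued. A minimal polling loop along the following lines would reproduce that wait; this is an illustrative sketch only, not the SPDK waitforlisten helper, and the sleep interval is an assumption (only the retry count of 100 appears in the trace).

    # Sketch of an RPC-socket wait loop (assumed sleep value; not SPDK's actual helper).
    rpc_sock=/var/tmp/spdk.sock
    tgt_pid=78032
    for ((i = 0; i < 100; i++)); do
        # Bail out if the target process died before it ever started listening.
        kill -0 "$tgt_pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
        # The UNIX domain socket is visible from the host namespace, so rpc.py can probe it directly.
        if [ -S "$rpc_sock" ] && \
           /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_sock" spdk_get_version >/dev/null 2>&1; then
            break
        fi
        sleep 0.5
    done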
00:19:38.197 [2024-11-17 01:39:46.632771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:38.197 [2024-11-17 01:39:46.632922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.197 [2024-11-17 01:39:46.632989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.197 [2024-11-17 01:39:46.632970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:38.455 [2024-11-17 01:39:46.796568] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.022 Malloc0 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.022 Delay0 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.022 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.022 [2024-11-17 01:39:47.469745] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:39.280 01:39:47 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:39.280 [2024-11-17 01:39:47.502080] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:19:39.280 01:39:47 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=78096 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:19:41.808 01:39:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:19:41.808 [global] 00:19:41.808 thread=1 00:19:41.808 invalidate=1 00:19:41.808 rw=write 00:19:41.808 time_based=1 00:19:41.808 runtime=60 00:19:41.808 ioengine=libaio 00:19:41.808 direct=1 00:19:41.808 bs=4096 00:19:41.808 iodepth=1 00:19:41.808 norandommap=0 00:19:41.808 numjobs=1 00:19:41.808 00:19:41.808 verify_dump=1 00:19:41.808 verify_backlog=512 00:19:41.808 verify_state_save=0 00:19:41.808 do_verify=1 00:19:41.808 verify=crc32c-intel 00:19:41.808 [job0] 00:19:41.808 filename=/dev/nvme0n1 00:19:41.808 Could not set queue depth (nvme0n1) 00:19:41.808 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:19:41.808 fio-3.35 00:19:41.808 Starting 1 thread 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:44.339 true 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:44.339 true 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:44.339 true 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:44.339 true 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.339 01:39:52 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:19:47.619 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:19:47.619 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.619 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:47.619 true 00:19:47.619 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:47.620 true 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:47.620 true 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:19:47.620 true 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:19:47.620 01:39:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 78096 00:20:43.860 00:20:43.860 job0: (groupid=0, jobs=1): err= 0: pid=78117: Sun Nov 17 01:40:49 2024 00:20:43.860 read: IOPS=741, BW=2965KiB/s (3036kB/s)(174MiB/60001msec) 00:20:43.860 slat (nsec): min=10219, max=63483, avg=12671.96, stdev=3667.53 00:20:43.860 clat (usec): min=181, max=3656, avg=220.67, stdev=34.74 00:20:43.860 lat (usec): min=193, max=3674, avg=233.34, stdev=35.78 00:20:43.860 clat percentiles (usec): 00:20:43.860 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:20:43.860 | 30.00th=[ 206], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 219], 00:20:43.860 | 70.00th=[ 227], 80.00th=[ 237], 90.00th=[ 253], 95.00th=[ 269], 00:20:43.860 | 99.00th=[ 302], 99.50th=[ 314], 99.90th=[ 424], 99.95th=[ 553], 00:20:43.860 | 99.99th=[ 1188] 00:20:43.860 write: IOPS=742, BW=2970KiB/s (3041kB/s)(174MiB/60001msec); 0 zone resets 00:20:43.860 slat (usec): min=12, max=10422, avg=19.87, stdev=61.97 00:20:43.860 clat (usec): min=74, max=40592k, avg=1090.74, stdev=192330.17 00:20:43.860 lat (usec): min=156, max=40592k, avg=1110.61, stdev=192330.17 00:20:43.860 clat percentiles (usec): 00:20:43.860 | 1.00th=[ 149], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:20:43.860 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:20:43.860 | 70.00th=[ 184], 80.00th=[ 194], 90.00th=[ 208], 95.00th=[ 221], 00:20:43.860 | 
99.00th=[ 247], 99.50th=[ 260], 99.90th=[ 392], 99.95th=[ 498], 00:20:43.860 | 99.99th=[ 4080] 00:20:43.860 bw ( KiB/s): min= 2640, max=10968, per=100.00%, avg=8926.74, stdev=1530.91, samples=39 00:20:43.860 iops : min= 660, max= 2742, avg=2231.67, stdev=382.74, samples=39 00:20:43.860 lat (usec) : 100=0.01%, 250=93.97%, 500=5.98%, 750=0.03%, 1000=0.01% 00:20:43.860 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, >=2000=0.01% 00:20:43.860 cpu : usr=0.52%, sys=1.96%, ctx=89029, majf=0, minf=5 00:20:43.860 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:43.860 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.860 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:43.860 issued rwts: total=44475,44544,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:43.860 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:43.860 00:20:43.860 Run status group 0 (all jobs): 00:20:43.860 READ: bw=2965KiB/s (3036kB/s), 2965KiB/s-2965KiB/s (3036kB/s-3036kB/s), io=174MiB (182MB), run=60001-60001msec 00:20:43.860 WRITE: bw=2970KiB/s (3041kB/s), 2970KiB/s-2970KiB/s (3041kB/s-3041kB/s), io=174MiB (182MB), run=60001-60001msec 00:20:43.860 00:20:43.860 Disk stats (read/write): 00:20:43.860 nvme0n1: ios=44294/44544, merge=0/0, ticks=10088/8265, in_queue=18353, util=99.59% 00:20:43.860 01:40:49 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:43.860 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:43.860 nvmf hotplug test: fio successful as expected 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.860 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:43.861 rmmod nvme_tcp 00:20:43.861 rmmod nvme_fabrics 00:20:43.861 rmmod nvme_keyring 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 78032 ']' 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 78032 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 78032 ']' 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 78032 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78032 00:20:43.861 killing process with pid 78032 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78032' 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 78032 00:20:43.861 01:40:50 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 78032 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:20:43.861 
01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@300 -- # return 0 00:20:43.861 00:20:43.861 real 1m5.716s 00:20:43.861 user 3m56.655s 00:20:43.861 sys 0m20.638s 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:43.861 ************************************ 00:20:43.861 END TEST nvmf_initiator_timeout 00:20:43.861 ************************************ 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ virt == phy ]] 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # 
run_test nvmf_nsid /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:43.861 ************************************ 00:20:43.861 START TEST nvmf_nsid 00:20:43.861 ************************************ 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:20:43.861 * Looking for test storage... 00:20:43.861 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:43.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.861 --rc genhtml_branch_coverage=1 00:20:43.861 --rc genhtml_function_coverage=1 00:20:43.861 --rc genhtml_legend=1 00:20:43.861 --rc geninfo_all_blocks=1 00:20:43.861 --rc geninfo_unexecuted_blocks=1 00:20:43.861 00:20:43.861 ' 00:20:43.861 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:43.861 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.861 --rc genhtml_branch_coverage=1 00:20:43.861 --rc genhtml_function_coverage=1 00:20:43.861 --rc genhtml_legend=1 00:20:43.861 --rc geninfo_all_blocks=1 00:20:43.862 --rc geninfo_unexecuted_blocks=1 00:20:43.862 00:20:43.862 ' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.862 --rc genhtml_branch_coverage=1 00:20:43.862 --rc genhtml_function_coverage=1 00:20:43.862 --rc genhtml_legend=1 00:20:43.862 --rc geninfo_all_blocks=1 00:20:43.862 --rc geninfo_unexecuted_blocks=1 00:20:43.862 00:20:43.862 ' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:43.862 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:43.862 --rc genhtml_branch_coverage=1 00:20:43.862 --rc genhtml_function_coverage=1 00:20:43.862 --rc genhtml_legend=1 00:20:43.862 --rc geninfo_all_blocks=1 00:20:43.862 --rc geninfo_unexecuted_blocks=1 00:20:43.862 00:20:43.862 ' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:43.862 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # 
subnqn3=nqn.2024-10.io.spdk:cnode2 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@158 -- # 
NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:43.862 Cannot find device "nvmf_init_br" 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # true 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:43.862 Cannot find device "nvmf_init_br2" 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # true 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:43.862 Cannot find device "nvmf_tgt_br" 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@164 -- # true 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:43.862 Cannot find device "nvmf_tgt_br2" 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@165 -- # true 00:20:43.862 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:43.862 Cannot find device "nvmf_init_br" 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@166 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:43.863 Cannot find device "nvmf_init_br2" 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@167 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:43.863 Cannot find device "nvmf_tgt_br" 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@168 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:43.863 Cannot find device "nvmf_tgt_br2" 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:43.863 Cannot find device "nvmf_br" 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@170 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:43.863 Cannot find device "nvmf_init_if" 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:43.863 Cannot find device "nvmf_init_if2" 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:43.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@173 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 
00:20:43.863 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@174 -- # true 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 
00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:43.863 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:43.863 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:20:43.863 00:20:43.863 --- 10.0.0.3 ping statistics --- 00:20:43.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.863 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:43.863 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:43.863 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.038 ms 00:20:43.863 00:20:43.863 --- 10.0.0.4 ping statistics --- 00:20:43.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.863 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:43.863 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:43.863 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:43.863 00:20:43.863 --- 10.0.0.1 ping statistics --- 00:20:43.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.863 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:43.863 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:43.863 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.060 ms 00:20:43.863 00:20:43.863 --- 10.0.0.2 ping statistics --- 00:20:43.863 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:43.863 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@461 -- # return 0 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:43.863 01:40:51 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=78986 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 78986 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 78986 ']' 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:43.863 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:43.863 01:40:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:43.863 [2024-11-17 01:40:52.143108] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:20:43.863 [2024-11-17 01:40:52.143282] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:44.122 [2024-11-17 01:40:52.325728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:44.122 [2024-11-17 01:40:52.413361] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:44.122 [2024-11-17 01:40:52.413438] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:44.122 [2024-11-17 01:40:52.413471] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:44.122 [2024-11-17 01:40:52.413493] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:44.122 [2024-11-17 01:40:52.413506] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:44.122 [2024-11-17 01:40:52.414620] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:44.122 [2024-11-17 01:40:52.559467] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:44.690 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:44.690 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:44.690 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:44.690 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:44.690 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=79018 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.3 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 
-- # [[ -z 10.0.0.1 ]] 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=ee2e07e9-09dd-48e1-b34c-6e0da003c8d6 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=8199bbfd-a061-4738-a802-7126cd5ae0bb 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=bce01ffa-bfdc-46af-bcac-4d51dbef7abb 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.950 null0 00:20:44.950 null1 00:20:44.950 null2 00:20:44.950 [2024-11-17 01:40:53.224381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:44.950 [2024-11-17 01:40:53.248564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 79018 /var/tmp/tgt2.sock 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 79018 ']' 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:44.950 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:44.950 01:40:53 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:44.950 [2024-11-17 01:40:53.321421] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:20:44.950 [2024-11-17 01:40:53.321596] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79018 ] 00:20:45.209 [2024-11-17 01:40:53.512525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:45.209 [2024-11-17 01:40:53.599877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:45.468 [2024-11-17 01:40:53.782726] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:46.035 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:46.035 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:20:46.035 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:20:46.294 [2024-11-17 01:40:54.643376] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:46.294 [2024-11-17 01:40:54.659748] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:20:46.294 nvme0n1 nvme0n2 00:20:46.294 nvme1n1 00:20:46.294 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:20:46.294 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:20:46.294 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:20:46.553 01:40:54 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid ee2e07e9-09dd-48e1-b34c-6e0da003c8d6 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=ee2e07e909dd48e1b34c6e0da003c8d6 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EE2E07E909DD48E1B34C6E0DA003C8D6 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ EE2E07E909DD48E1B34C6E0DA003C8D6 == \E\E\2\E\0\7\E\9\0\9\D\D\4\8\E\1\B\3\4\C\6\E\0\D\A\0\0\3\C\8\D\6 ]] 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:20:47.489 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 8199bbfd-a061-4738-a802-7126cd5ae0bb 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:20:47.748 01:40:55 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8199bbfda0614738a8027126cd5ae0bb 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8199BBFDA0614738A8027126CD5AE0BB 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 8199BBFDA0614738A8027126CD5AE0BB == \8\1\9\9\B\B\F\D\A\0\6\1\4\7\3\8\A\8\0\2\7\1\2\6\C\D\5\A\E\0\B\B ]] 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid bce01ffa-bfdc-46af-bcac-4d51dbef7abb 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=bce01ffabfdc46afbcac4d51dbef7abb 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo BCE01FFABFDC46AFBCAC4D51DBEF7ABB 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ BCE01FFABFDC46AFBCAC4D51DBEF7ABB == \B\C\E\0\1\F\F\A\B\F\D\C\4\6\A\F\B\C\A\C\4\D\5\1\D\B\E\F\7\A\B\B ]] 00:20:47.748 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 79018 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 79018 ']' 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 79018 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79018 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:48.007 killing process with pid 79018 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79018' 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 79018 00:20:48.007 01:40:56 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 79018 00:20:49.913 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:20:49.913 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:49.913 01:40:57 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:20:49.913 
01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:49.913 rmmod nvme_tcp 00:20:49.913 rmmod nvme_fabrics 00:20:49.913 rmmod nvme_keyring 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 78986 ']' 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 78986 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 78986 ']' 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 78986 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78986 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:49.913 killing process with pid 78986 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78986' 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 78986 00:20:49.913 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 78986 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:50.481 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:50.741 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:50.741 01:40:58 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:50.741 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:50.741 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:50.741 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:50.741 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:50.741 01:40:58 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@300 -- # return 0 00:20:50.741 00:20:50.741 real 0m7.741s 00:20:50.741 user 0m11.990s 00:20:50.741 sys 0m1.799s 00:20:50.741 ************************************ 00:20:50.741 END TEST nvmf_nsid 00:20:50.741 ************************************ 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:20:50.741 00:20:50.741 real 7m43.998s 00:20:50.741 user 18m49.124s 00:20:50.741 sys 1m53.461s 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:50.741 01:40:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:50.741 ************************************ 00:20:50.741 END TEST nvmf_target_extra 00:20:50.741 ************************************ 00:20:51.001 01:40:59 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:51.001 01:40:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.001 01:40:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.001 01:40:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:51.001 ************************************ 00:20:51.001 START TEST nvmf_host 00:20:51.001 ************************************ 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:20:51.001 * Looking for test storage... 
00:20:51.001 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.001 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:51.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.002 --rc genhtml_branch_coverage=1 00:20:51.002 --rc genhtml_function_coverage=1 00:20:51.002 --rc genhtml_legend=1 00:20:51.002 --rc geninfo_all_blocks=1 00:20:51.002 --rc geninfo_unexecuted_blocks=1 00:20:51.002 00:20:51.002 ' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:51.002 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:20:51.002 --rc genhtml_branch_coverage=1 00:20:51.002 --rc genhtml_function_coverage=1 00:20:51.002 --rc genhtml_legend=1 00:20:51.002 --rc geninfo_all_blocks=1 00:20:51.002 --rc geninfo_unexecuted_blocks=1 00:20:51.002 00:20:51.002 ' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:51.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.002 --rc genhtml_branch_coverage=1 00:20:51.002 --rc genhtml_function_coverage=1 00:20:51.002 --rc genhtml_legend=1 00:20:51.002 --rc geninfo_all_blocks=1 00:20:51.002 --rc geninfo_unexecuted_blocks=1 00:20:51.002 00:20:51.002 ' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:51.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.002 --rc genhtml_branch_coverage=1 00:20:51.002 --rc genhtml_function_coverage=1 00:20:51.002 --rc genhtml_legend=1 00:20:51.002 --rc geninfo_all_blocks=1 00:20:51.002 --rc geninfo_unexecuted_blocks=1 00:20:51.002 00:20:51.002 ' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.002 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 1 -eq 0 ]] 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:51.002 
01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:51.002 01:40:59 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:51.263 ************************************ 00:20:51.263 START TEST nvmf_identify 00:20:51.263 ************************************ 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:51.263 * Looking for test storage... 00:20:51.263 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:51.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.263 --rc genhtml_branch_coverage=1 00:20:51.263 --rc genhtml_function_coverage=1 00:20:51.263 --rc genhtml_legend=1 00:20:51.263 --rc geninfo_all_blocks=1 00:20:51.263 --rc geninfo_unexecuted_blocks=1 00:20:51.263 00:20:51.263 ' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:51.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.263 --rc genhtml_branch_coverage=1 00:20:51.263 --rc genhtml_function_coverage=1 00:20:51.263 --rc genhtml_legend=1 00:20:51.263 --rc geninfo_all_blocks=1 00:20:51.263 --rc geninfo_unexecuted_blocks=1 00:20:51.263 00:20:51.263 ' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:51.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.263 --rc genhtml_branch_coverage=1 00:20:51.263 --rc genhtml_function_coverage=1 00:20:51.263 --rc genhtml_legend=1 00:20:51.263 --rc geninfo_all_blocks=1 00:20:51.263 --rc geninfo_unexecuted_blocks=1 00:20:51.263 00:20:51.263 ' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:51.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:51.263 --rc genhtml_branch_coverage=1 00:20:51.263 --rc genhtml_function_coverage=1 00:20:51.263 --rc genhtml_legend=1 00:20:51.263 --rc geninfo_all_blocks=1 00:20:51.263 --rc geninfo_unexecuted_blocks=1 00:20:51.263 00:20:51.263 ' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.263 
01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.263 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:51.264 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.264 01:40:59 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.264 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:51.523 Cannot find device "nvmf_init_br" 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # true 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:51.523 Cannot find device "nvmf_init_br2" 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # true 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:51.523 Cannot find device "nvmf_tgt_br" 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@164 -- # true 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 
00:20:51.523 Cannot find device "nvmf_tgt_br2" 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@165 -- # true 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:51.523 Cannot find device "nvmf_init_br" 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@166 -- # true 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:51.523 Cannot find device "nvmf_init_br2" 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@167 -- # true 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:51.523 Cannot find device "nvmf_tgt_br" 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@168 -- # true 00:20:51.523 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:51.523 Cannot find device "nvmf_tgt_br2" 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # true 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:51.524 Cannot find device "nvmf_br" 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@170 -- # true 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:51.524 Cannot find device "nvmf_init_if" 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # true 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:51.524 Cannot find device "nvmf_init_if2" 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # true 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:51.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@173 -- # true 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:51.524 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@174 -- # true 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:51.524 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:51.784 
01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:51.784 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:51.784 01:40:59 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:51.784 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:20:51.784 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.113 ms 00:20:51.784 00:20:51.784 --- 10.0.0.3 ping statistics --- 00:20:51.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.784 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:51.784 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:20:51.784 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:20:51.784 00:20:51.784 --- 10.0.0.4 ping statistics --- 00:20:51.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.784 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:51.784 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:51.784 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:20:51.784 00:20:51.784 --- 10.0.0.1 ping statistics --- 00:20:51.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.784 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:51.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:20:51.784 00:20:51.784 --- 10.0.0.2 ping statistics --- 00:20:51.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.784 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@461 -- # return 0 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:51.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=79402 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 79402 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 79402 ']' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:51.784 01:41:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:52.042 [2024-11-17 01:41:00.291007] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:52.042 [2024-11-17 01:41:00.291173] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:52.042 [2024-11-17 01:41:00.480650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:52.301 [2024-11-17 01:41:00.611115] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:52.301 [2024-11-17 01:41:00.611182] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:52.301 [2024-11-17 01:41:00.611207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:52.301 [2024-11-17 01:41:00.611223] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:52.301 [2024-11-17 01:41:00.611240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
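The identify.sh fixture then launches nvmf_tgt inside that namespace exactly as traced above, and waitforlisten blocks until the application answers on its JSON-RPC socket. A rough standalone equivalent is sketched below; the polling loop is an assumption (the real helper is waitforlisten in autotest_common.sh), while the binary path and flags are taken from the log:

SPDK=/home/vagrant/spdk_repo/spdk

# Run the target inside the test namespace: shm id 0 (-i), tracepoint group mask 0xFFFF (-e),
# core mask 0xF (-m), i.e. the four cores the EAL banner above reports.
ip netns exec nvmf_tgt_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# Block until the app is serving JSON-RPC on the default /var/tmp/spdk.sock socket.
until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done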
00:20:52.301 [2024-11-17 01:41:00.613485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.301 [2024-11-17 01:41:00.613619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:52.301 [2024-11-17 01:41:00.613752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:52.301 [2024-11-17 01:41:00.613957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:52.560 [2024-11-17 01:41:00.822592] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 [2024-11-17 01:41:01.284957] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 Malloc0 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 [2024-11-17 01:41:01.431752] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.128 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 [ 00:20:53.128 { 00:20:53.128 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:53.128 "subtype": "Discovery", 00:20:53.128 "listen_addresses": [ 00:20:53.128 { 00:20:53.128 "trtype": "TCP", 00:20:53.128 "adrfam": "IPv4", 00:20:53.128 "traddr": "10.0.0.3", 00:20:53.128 "trsvcid": "4420" 00:20:53.128 } 00:20:53.128 ], 00:20:53.128 "allow_any_host": true, 00:20:53.128 "hosts": [] 00:20:53.128 }, 00:20:53.128 { 00:20:53.128 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:53.128 "subtype": "NVMe", 00:20:53.128 "listen_addresses": [ 00:20:53.128 { 00:20:53.128 "trtype": "TCP", 00:20:53.128 "adrfam": "IPv4", 00:20:53.128 "traddr": "10.0.0.3", 00:20:53.128 "trsvcid": "4420" 00:20:53.128 } 00:20:53.128 ], 00:20:53.128 "allow_any_host": true, 00:20:53.128 "hosts": [], 00:20:53.128 "serial_number": "SPDK00000000000001", 00:20:53.128 "model_number": "SPDK bdev Controller", 00:20:53.128 "max_namespaces": 32, 00:20:53.128 "min_cntlid": 1, 00:20:53.128 "max_cntlid": 65519, 00:20:53.128 "namespaces": [ 00:20:53.128 { 00:20:53.128 "nsid": 1, 00:20:53.128 "bdev_name": "Malloc0", 00:20:53.128 "name": "Malloc0", 00:20:53.128 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:20:53.128 "eui64": "ABCDEF0123456789", 00:20:53.128 "uuid": "2fe4e366-ea53-441f-9aa2-decb7ef40ed7" 00:20:53.128 } 00:20:53.128 ] 00:20:53.129 } 00:20:53.129 ] 00:20:53.129 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.129 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:20:53.129 [2024-11-17 01:41:01.523587] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
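The rpc_cmd calls traced above are thin wrappers around scripts/rpc.py. Stripped of the test harness, the configuration that produces the subsystem dump shown, followed by the identify run against the discovery NQN, looks roughly like this (a sketch; the default /var/tmp/spdk.sock RPC socket is assumed):

rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"

# TCP transport with the same options the test passes (-o -u 8192).
$rpc nvmf_create_transport -t tcp -o -u 8192

# 64 MiB Malloc bdev with 512-byte blocks to back the namespace.
$rpc bdev_malloc_create 64 512 -b Malloc0

# Subsystem open to any host (-a), serial number SPDK00000000000001 (-s),
# plus the namespace with the NGUID/EUI64 that show up in the dump above.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
    --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789

# Data and discovery listeners on the namespace-side address.
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420
$rpc nvmf_get_subsystems

# Query the discovery controller over the fabric with full debug logging (-L all).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all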
00:20:53.129 [2024-11-17 01:41:01.523748] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79437 ] 00:20:53.392 [2024-11-17 01:41:01.708187] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:20:53.392 [2024-11-17 01:41:01.708323] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:53.392 [2024-11-17 01:41:01.708337] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:53.392 [2024-11-17 01:41:01.708362] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:53.392 [2024-11-17 01:41:01.708377] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:53.392 [2024-11-17 01:41:01.708791] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:20:53.392 [2024-11-17 01:41:01.712966] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:53.392 [2024-11-17 01:41:01.713057] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:53.392 [2024-11-17 01:41:01.713078] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:53.392 [2024-11-17 01:41:01.713088] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:53.392 [2024-11-17 01:41:01.713113] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:53.392 [2024-11-17 01:41:01.713228] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.713250] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.713260] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.392 [2024-11-17 01:41:01.713306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:53.392 [2024-11-17 01:41:01.713351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.392 [2024-11-17 01:41:01.720871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.392 [2024-11-17 01:41:01.720905] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.392 [2024-11-17 01:41:01.720930] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.720940] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.392 [2024-11-17 01:41:01.720963] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:53.392 [2024-11-17 01:41:01.720980] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:20:53.392 [2024-11-17 01:41:01.720991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:20:53.392 [2024-11-17 01:41:01.721013] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721022] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.392 [2024-11-17 01:41:01.721049] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.392 [2024-11-17 01:41:01.721088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.392 [2024-11-17 01:41:01.721178] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.392 [2024-11-17 01:41:01.721194] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.392 [2024-11-17 01:41:01.721202] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.392 [2024-11-17 01:41:01.721243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:20:53.392 [2024-11-17 01:41:01.721267] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:20:53.392 [2024-11-17 01:41:01.721281] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721289] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.392 [2024-11-17 01:41:01.721314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.392 [2024-11-17 01:41:01.721346] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.392 [2024-11-17 01:41:01.721413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.392 [2024-11-17 01:41:01.721425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.392 [2024-11-17 01:41:01.721432] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721439] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.392 [2024-11-17 01:41:01.721450] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:20:53.392 [2024-11-17 01:41:01.721464] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:53.392 [2024-11-17 01:41:01.721477] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721485] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721493] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.392 [2024-11-17 01:41:01.721506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.392 [2024-11-17 01:41:01.721535] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.392 [2024-11-17 01:41:01.721597] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.392 [2024-11-17 01:41:01.721609] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.392 [2024-11-17 01:41:01.721615] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721622] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.392 [2024-11-17 01:41:01.721632] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:53.392 [2024-11-17 01:41:01.721649] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721662] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.392 [2024-11-17 01:41:01.721685] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.392 [2024-11-17 01:41:01.721713] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.392 [2024-11-17 01:41:01.721773] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.392 [2024-11-17 01:41:01.721785] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.392 [2024-11-17 01:41:01.721791] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.392 [2024-11-17 01:41:01.721807] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:53.392 [2024-11-17 01:41:01.721819] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:53.392 [2024-11-17 01:41:01.721833] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:53.392 [2024-11-17 01:41:01.721961] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:20:53.392 [2024-11-17 01:41:01.721972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:53.392 [2024-11-17 01:41:01.721989] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.721999] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.722012] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.392 [2024-11-17 01:41:01.722026] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.392 [2024-11-17 01:41:01.722056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.392 [2024-11-17 01:41:01.722132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.392 [2024-11-17 01:41:01.722144] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.392 [2024-11-17 01:41:01.722151] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.722157] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.392 [2024-11-17 01:41:01.722167] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:53.392 [2024-11-17 01:41:01.722184] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.722193] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.392 [2024-11-17 01:41:01.722201] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.393 [2024-11-17 01:41:01.722214] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.393 [2024-11-17 01:41:01.722245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.393 [2024-11-17 01:41:01.722305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.393 [2024-11-17 01:41:01.722317] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.393 [2024-11-17 01:41:01.722323] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722329] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.393 [2024-11-17 01:41:01.722339] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:53.393 [2024-11-17 01:41:01.722352] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:53.393 [2024-11-17 01:41:01.722380] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:20:53.393 [2024-11-17 01:41:01.722393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:53.393 [2024-11-17 01:41:01.722413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.393 [2024-11-17 01:41:01.722436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.393 [2024-11-17 01:41:01.722466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.393 [2024-11-17 01:41:01.722580] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.393 [2024-11-17 01:41:01.722594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.393 [2024-11-17 01:41:01.722601] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722609] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:53.393 [2024-11-17 01:41:01.722618] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:53.393 [2024-11-17 01:41:01.722626] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722643] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722654] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.393 [2024-11-17 01:41:01.722687] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.393 [2024-11-17 01:41:01.722693] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722708] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.393 [2024-11-17 01:41:01.722726] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:20:53.393 [2024-11-17 01:41:01.722736] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:20:53.393 [2024-11-17 01:41:01.722744] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:20:53.393 [2024-11-17 01:41:01.722753] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:20:53.393 [2024-11-17 01:41:01.722761] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:20:53.393 [2024-11-17 01:41:01.722770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:20:53.393 [2024-11-17 01:41:01.722784] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:53.393 [2024-11-17 01:41:01.722819] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722834] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.393 [2024-11-17 01:41:01.722857] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:53.393 [2024-11-17 01:41:01.722890] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.393 [2024-11-17 01:41:01.722964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.393 [2024-11-17 01:41:01.722978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.393 [2024-11-17 01:41:01.722985] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.722992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.393 [2024-11-17 01:41:01.723008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723017] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723024] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.393 [2024-11-17 01:41:01.723043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.393 [2024-11-17 01:41:01.723054] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723061] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723067] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:53.393 [2024-11-17 01:41:01.723077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.393 [2024-11-17 01:41:01.723085] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723094] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723101] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:53.393 [2024-11-17 01:41:01.723111] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.393 [2024-11-17 01:41:01.723120] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723132] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.393 [2024-11-17 01:41:01.723141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.393 [2024-11-17 01:41:01.723150] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:53.393 [2024-11-17 01:41:01.723174] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:53.393 [2024-11-17 01:41:01.723186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.393 [2024-11-17 01:41:01.723206] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.393 [2024-11-17 01:41:01.723239] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.393 [2024-11-17 01:41:01.723255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:53.393 [2024-11-17 01:41:01.723263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:53.393 [2024-11-17 01:41:01.723270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.393 [2024-11-17 01:41:01.723277] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.393 [2024-11-17 01:41:01.723382] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.393 [2024-11-17 01:41:01.723411] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.393 [2024-11-17 01:41:01.723419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.393 [2024-11-17 01:41:01.723427] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.393 [2024-11-17 01:41:01.723437] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:20:53.393 [2024-11-17 01:41:01.723447] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:20:53.393 [2024-11-17 01:41:01.723469] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.723479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.394 [2024-11-17 01:41:01.723492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.394 [2024-11-17 01:41:01.723526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.394 [2024-11-17 01:41:01.723626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.394 [2024-11-17 01:41:01.723667] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.394 [2024-11-17 01:41:01.723676] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.723684] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:53.394 [2024-11-17 01:41:01.723694] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:53.394 [2024-11-17 01:41:01.723706] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.723721] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.723729] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.723744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.394 [2024-11-17 01:41:01.723755] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.394 [2024-11-17 01:41:01.723762] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.723773] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.394 [2024-11-17 01:41:01.723826] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:20:53.394 [2024-11-17 01:41:01.723894] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.723913] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.394 [2024-11-17 01:41:01.723930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.394 [2024-11-17 01:41:01.723943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.723951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:20:53.394 [2024-11-17 01:41:01.723958] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:53.394 [2024-11-17 01:41:01.723974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.394 [2024-11-17 01:41:01.724041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.394 [2024-11-17 01:41:01.724058] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:53.394 [2024-11-17 01:41:01.724293] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.394 [2024-11-17 01:41:01.724318] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.394 [2024-11-17 01:41:01.724326] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724334] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=1024, cccid=4 00:20:53.394 [2024-11-17 01:41:01.724342] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=1024 00:20:53.394 [2024-11-17 01:41:01.724350] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724362] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724370] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.394 [2024-11-17 01:41:01.724392] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.394 [2024-11-17 01:41:01.724398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:53.394 [2024-11-17 01:41:01.724435] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.394 [2024-11-17 01:41:01.724448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.394 [2024-11-17 01:41:01.724454] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724461] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.394 [2024-11-17 01:41:01.724493] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724509] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.394 [2024-11-17 01:41:01.724523] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.394 [2024-11-17 01:41:01.724562] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.394 [2024-11-17 01:41:01.724669] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.394 [2024-11-17 01:41:01.724681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.394 [2024-11-17 01:41:01.724687] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724693] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x61500000f080): datao=0, datal=3072, cccid=4 00:20:53.394 [2024-11-17 01:41:01.724701] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=3072 00:20:53.394 [2024-11-17 01:41:01.724708] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724719] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724726] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724738] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.394 [2024-11-17 01:41:01.724753] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.394 [2024-11-17 01:41:01.724760] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.724766] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.394 [2024-11-17 01:41:01.724789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.728873] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.394 [2024-11-17 01:41:01.728893] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.394 [2024-11-17 01:41:01.728939] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.394 [2024-11-17 01:41:01.729063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.394 [2024-11-17 01:41:01.729091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.394 [2024-11-17 01:41:01.729098] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.729105] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8, cccid=4 00:20:53.394 [2024-11-17 01:41:01.729113] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=8 00:20:53.394 [2024-11-17 01:41:01.729126] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.729141] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.729149] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.729175] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.394 [2024-11-17 01:41:01.729188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.394 [2024-11-17 01:41:01.729194] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.394 [2024-11-17 01:41:01.729201] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.394 ===================================================== 00:20:53.394 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:53.394 ===================================================== 00:20:53.394 Controller Capabilities/Features 00:20:53.394 ================================ 00:20:53.394 Vendor ID: 0000 00:20:53.394 Subsystem Vendor ID: 0000 00:20:53.394 Serial Number: .................... 
00:20:53.394 Model Number: ........................................ 00:20:53.394 Firmware Version: 25.01 00:20:53.394 Recommended Arb Burst: 0 00:20:53.394 IEEE OUI Identifier: 00 00 00 00:20:53.394 Multi-path I/O 00:20:53.394 May have multiple subsystem ports: No 00:20:53.394 May have multiple controllers: No 00:20:53.394 Associated with SR-IOV VF: No 00:20:53.394 Max Data Transfer Size: 131072 00:20:53.394 Max Number of Namespaces: 0 00:20:53.394 Max Number of I/O Queues: 1024 00:20:53.394 NVMe Specification Version (VS): 1.3 00:20:53.394 NVMe Specification Version (Identify): 1.3 00:20:53.394 Maximum Queue Entries: 128 00:20:53.394 Contiguous Queues Required: Yes 00:20:53.394 Arbitration Mechanisms Supported 00:20:53.394 Weighted Round Robin: Not Supported 00:20:53.394 Vendor Specific: Not Supported 00:20:53.394 Reset Timeout: 15000 ms 00:20:53.394 Doorbell Stride: 4 bytes 00:20:53.394 NVM Subsystem Reset: Not Supported 00:20:53.394 Command Sets Supported 00:20:53.395 NVM Command Set: Supported 00:20:53.395 Boot Partition: Not Supported 00:20:53.395 Memory Page Size Minimum: 4096 bytes 00:20:53.395 Memory Page Size Maximum: 4096 bytes 00:20:53.395 Persistent Memory Region: Not Supported 00:20:53.395 Optional Asynchronous Events Supported 00:20:53.395 Namespace Attribute Notices: Not Supported 00:20:53.395 Firmware Activation Notices: Not Supported 00:20:53.395 ANA Change Notices: Not Supported 00:20:53.395 PLE Aggregate Log Change Notices: Not Supported 00:20:53.395 LBA Status Info Alert Notices: Not Supported 00:20:53.395 EGE Aggregate Log Change Notices: Not Supported 00:20:53.395 Normal NVM Subsystem Shutdown event: Not Supported 00:20:53.395 Zone Descriptor Change Notices: Not Supported 00:20:53.395 Discovery Log Change Notices: Supported 00:20:53.395 Controller Attributes 00:20:53.395 128-bit Host Identifier: Not Supported 00:20:53.395 Non-Operational Permissive Mode: Not Supported 00:20:53.395 NVM Sets: Not Supported 00:20:53.395 Read Recovery Levels: Not Supported 00:20:53.395 Endurance Groups: Not Supported 00:20:53.395 Predictable Latency Mode: Not Supported 00:20:53.395 Traffic Based Keep ALive: Not Supported 00:20:53.395 Namespace Granularity: Not Supported 00:20:53.395 SQ Associations: Not Supported 00:20:53.395 UUID List: Not Supported 00:20:53.395 Multi-Domain Subsystem: Not Supported 00:20:53.395 Fixed Capacity Management: Not Supported 00:20:53.395 Variable Capacity Management: Not Supported 00:20:53.395 Delete Endurance Group: Not Supported 00:20:53.395 Delete NVM Set: Not Supported 00:20:53.395 Extended LBA Formats Supported: Not Supported 00:20:53.395 Flexible Data Placement Supported: Not Supported 00:20:53.395 00:20:53.395 Controller Memory Buffer Support 00:20:53.395 ================================ 00:20:53.395 Supported: No 00:20:53.395 00:20:53.395 Persistent Memory Region Support 00:20:53.395 ================================ 00:20:53.395 Supported: No 00:20:53.395 00:20:53.395 Admin Command Set Attributes 00:20:53.395 ============================ 00:20:53.395 Security Send/Receive: Not Supported 00:20:53.395 Format NVM: Not Supported 00:20:53.395 Firmware Activate/Download: Not Supported 00:20:53.395 Namespace Management: Not Supported 00:20:53.395 Device Self-Test: Not Supported 00:20:53.395 Directives: Not Supported 00:20:53.395 NVMe-MI: Not Supported 00:20:53.395 Virtualization Management: Not Supported 00:20:53.395 Doorbell Buffer Config: Not Supported 00:20:53.395 Get LBA Status Capability: Not Supported 00:20:53.395 Command & Feature Lockdown Capability: 
Not Supported 00:20:53.395 Abort Command Limit: 1 00:20:53.395 Async Event Request Limit: 4 00:20:53.395 Number of Firmware Slots: N/A 00:20:53.395 Firmware Slot 1 Read-Only: N/A 00:20:53.395 Firmware Activation Without Reset: N/A 00:20:53.395 Multiple Update Detection Support: N/A 00:20:53.395 Firmware Update Granularity: No Information Provided 00:20:53.395 Per-Namespace SMART Log: No 00:20:53.395 Asymmetric Namespace Access Log Page: Not Supported 00:20:53.395 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:20:53.395 Command Effects Log Page: Not Supported 00:20:53.395 Get Log Page Extended Data: Supported 00:20:53.395 Telemetry Log Pages: Not Supported 00:20:53.395 Persistent Event Log Pages: Not Supported 00:20:53.395 Supported Log Pages Log Page: May Support 00:20:53.395 Commands Supported & Effects Log Page: Not Supported 00:20:53.395 Feature Identifiers & Effects Log Page:May Support 00:20:53.395 NVMe-MI Commands & Effects Log Page: May Support 00:20:53.395 Data Area 4 for Telemetry Log: Not Supported 00:20:53.395 Error Log Page Entries Supported: 128 00:20:53.395 Keep Alive: Not Supported 00:20:53.395 00:20:53.395 NVM Command Set Attributes 00:20:53.395 ========================== 00:20:53.395 Submission Queue Entry Size 00:20:53.395 Max: 1 00:20:53.395 Min: 1 00:20:53.395 Completion Queue Entry Size 00:20:53.395 Max: 1 00:20:53.395 Min: 1 00:20:53.395 Number of Namespaces: 0 00:20:53.395 Compare Command: Not Supported 00:20:53.395 Write Uncorrectable Command: Not Supported 00:20:53.395 Dataset Management Command: Not Supported 00:20:53.395 Write Zeroes Command: Not Supported 00:20:53.395 Set Features Save Field: Not Supported 00:20:53.395 Reservations: Not Supported 00:20:53.395 Timestamp: Not Supported 00:20:53.395 Copy: Not Supported 00:20:53.395 Volatile Write Cache: Not Present 00:20:53.395 Atomic Write Unit (Normal): 1 00:20:53.395 Atomic Write Unit (PFail): 1 00:20:53.395 Atomic Compare & Write Unit: 1 00:20:53.395 Fused Compare & Write: Supported 00:20:53.395 Scatter-Gather List 00:20:53.395 SGL Command Set: Supported 00:20:53.395 SGL Keyed: Supported 00:20:53.395 SGL Bit Bucket Descriptor: Not Supported 00:20:53.395 SGL Metadata Pointer: Not Supported 00:20:53.395 Oversized SGL: Not Supported 00:20:53.395 SGL Metadata Address: Not Supported 00:20:53.395 SGL Offset: Supported 00:20:53.395 Transport SGL Data Block: Not Supported 00:20:53.395 Replay Protected Memory Block: Not Supported 00:20:53.395 00:20:53.395 Firmware Slot Information 00:20:53.395 ========================= 00:20:53.395 Active slot: 0 00:20:53.395 00:20:53.395 00:20:53.395 Error Log 00:20:53.395 ========= 00:20:53.395 00:20:53.395 Active Namespaces 00:20:53.395 ================= 00:20:53.395 Discovery Log Page 00:20:53.395 ================== 00:20:53.395 Generation Counter: 2 00:20:53.395 Number of Records: 2 00:20:53.395 Record Format: 0 00:20:53.395 00:20:53.395 Discovery Log Entry 0 00:20:53.395 ---------------------- 00:20:53.395 Transport Type: 3 (TCP) 00:20:53.395 Address Family: 1 (IPv4) 00:20:53.395 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:53.395 Entry Flags: 00:20:53.395 Duplicate Returned Information: 1 00:20:53.395 Explicit Persistent Connection Support for Discovery: 1 00:20:53.395 Transport Requirements: 00:20:53.395 Secure Channel: Not Required 00:20:53.395 Port ID: 0 (0x0000) 00:20:53.395 Controller ID: 65535 (0xffff) 00:20:53.395 Admin Max SQ Size: 128 00:20:53.395 Transport Service Identifier: 4420 00:20:53.395 NVM Subsystem Qualified Name: 
nqn.2014-08.org.nvmexpress.discovery 00:20:53.395 Transport Address: 10.0.0.3 00:20:53.395 Discovery Log Entry 1 00:20:53.395 ---------------------- 00:20:53.395 Transport Type: 3 (TCP) 00:20:53.395 Address Family: 1 (IPv4) 00:20:53.395 Subsystem Type: 2 (NVM Subsystem) 00:20:53.395 Entry Flags: 00:20:53.395 Duplicate Returned Information: 0 00:20:53.395 Explicit Persistent Connection Support for Discovery: 0 00:20:53.395 Transport Requirements: 00:20:53.395 Secure Channel: Not Required 00:20:53.395 Port ID: 0 (0x0000) 00:20:53.395 Controller ID: 65535 (0xffff) 00:20:53.395 Admin Max SQ Size: 128 00:20:53.395 Transport Service Identifier: 4420 00:20:53.395 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:53.395 Transport Address: 10.0.0.3 [2024-11-17 01:41:01.729383] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:20:53.396 [2024-11-17 01:41:01.729424] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.729438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.396 [2024-11-17 01:41:01.729449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.729462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.396 [2024-11-17 01:41:01.729470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.729478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.396 [2024-11-17 01:41:01.729486] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.729494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.396 [2024-11-17 01:41:01.729521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.729531] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.729538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.396 [2024-11-17 01:41:01.729553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.396 [2024-11-17 01:41:01.729597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.396 [2024-11-17 01:41:01.729671] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.396 [2024-11-17 01:41:01.729684] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.396 [2024-11-17 01:41:01.729691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.729699] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.729713] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.729726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.396 
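The discovery log reported above contains the expected two records: the discovery subsystem itself and nqn.2016-06.io.spdk:cnode1, both over TCP at 10.0.0.3:4420. This test drives everything through SPDK's own initiator, but the same records could be cross-checked from the kernel initiator with nvme-cli; the commands below are not part of this run, only an illustrative sketch, and they rely on the nvme-tcp module that common.sh's modprobe above already loaded:

# List the discovery log records the target just reported.
nvme discover -t tcp -a 10.0.0.3 -s 4420

# Optionally attach to the NVM subsystem entry, inspect the Malloc0 namespace, detach again.
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420
nvme list
nvme disconnect -n nqn.2016-06.io.spdk:cnode1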
[2024-11-17 01:41:01.729733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.396 [2024-11-17 01:41:01.729746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.396 [2024-11-17 01:41:01.729780] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.396 [2024-11-17 01:41:01.729879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.396 [2024-11-17 01:41:01.729893] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.396 [2024-11-17 01:41:01.729899] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.729906] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.729915] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:20:53.396 [2024-11-17 01:41:01.729924] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:20:53.396 [2024-11-17 01:41:01.729944] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.729954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.729964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.396 [2024-11-17 01:41:01.729980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.396 [2024-11-17 01:41:01.730011] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.396 [2024-11-17 01:41:01.730078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.396 [2024-11-17 01:41:01.730093] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.396 [2024-11-17 01:41:01.730099] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730106] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.730125] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730133] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.396 [2024-11-17 01:41:01.730152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.396 [2024-11-17 01:41:01.730178] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.396 [2024-11-17 01:41:01.730245] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.396 [2024-11-17 01:41:01.730261] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.396 [2024-11-17 01:41:01.730267] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730274] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.730295] 
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730304] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730310] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.396 [2024-11-17 01:41:01.730323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.396 [2024-11-17 01:41:01.730351] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.396 [2024-11-17 01:41:01.730420] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.396 [2024-11-17 01:41:01.730432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.396 [2024-11-17 01:41:01.730438] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730445] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.730462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730470] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730476] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.396 [2024-11-17 01:41:01.730493] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.396 [2024-11-17 01:41:01.730520] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.396 [2024-11-17 01:41:01.730579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.396 [2024-11-17 01:41:01.730590] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.396 [2024-11-17 01:41:01.730596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730602] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.730619] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730636] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.396 [2024-11-17 01:41:01.730658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.396 [2024-11-17 01:41:01.730687] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.396 [2024-11-17 01:41:01.730748] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.396 [2024-11-17 01:41:01.730760] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.396 [2024-11-17 01:41:01.730766] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730776] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.396 [2024-11-17 01:41:01.730823] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730850] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730857] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.396 [2024-11-17 01:41:01.730870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.396 [2024-11-17 01:41:01.730901] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.396 [2024-11-17 01:41:01.730959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.396 [2024-11-17 01:41:01.730977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.396 [2024-11-17 01:41:01.730984] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.396 [2024-11-17 01:41:01.730992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.731010] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731018] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731025] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.731042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.397 [2024-11-17 01:41:01.731069] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.397 [2024-11-17 01:41:01.731138] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.397 [2024-11-17 01:41:01.731150] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.397 [2024-11-17 01:41:01.731157] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731164] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.731182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.731224] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.397 [2024-11-17 01:41:01.731265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.397 [2024-11-17 01:41:01.731325] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.397 [2024-11-17 01:41:01.731336] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.397 [2024-11-17 01:41:01.731342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731349] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.731365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731373] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731379] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.731395] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.397 [2024-11-17 01:41:01.731425] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.397 [2024-11-17 01:41:01.731482] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.397 [2024-11-17 01:41:01.731494] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.397 [2024-11-17 01:41:01.731500] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.731523] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.731554] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.397 [2024-11-17 01:41:01.731580] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.397 [2024-11-17 01:41:01.731661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.397 [2024-11-17 01:41:01.731675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.397 [2024-11-17 01:41:01.731681] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731688] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.731705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731727] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.731740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.397 [2024-11-17 01:41:01.731782] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.397 [2024-11-17 01:41:01.731883] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.397 [2024-11-17 01:41:01.731907] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.397 [2024-11-17 01:41:01.731915] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731923] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.731945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731955] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.731962] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.731975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.397 [2024-11-17 01:41:01.732019] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.397 [2024-11-17 01:41:01.732082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.397 [2024-11-17 01:41:01.732106] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.397 [2024-11-17 01:41:01.732114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732122] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.732140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732153] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732160] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.732188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.397 [2024-11-17 01:41:01.732215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.397 [2024-11-17 01:41:01.732281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.397 [2024-11-17 01:41:01.732293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.397 [2024-11-17 01:41:01.732299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732306] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.732326] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732335] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.732353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.397 [2024-11-17 01:41:01.732380] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.397 [2024-11-17 01:41:01.732442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.397 [2024-11-17 01:41:01.732459] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.397 [2024-11-17 01:41:01.732466] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.397 [2024-11-17 01:41:01.732491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732499] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.397 [2024-11-17 01:41:01.732506] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.397 [2024-11-17 01:41:01.732518] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.398 [2024-11-17 01:41:01.732545] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.398 [2024-11-17 01:41:01.732608] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.398 [2024-11-17 01:41:01.732624] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.398 [2024-11-17 01:41:01.732632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.398 [2024-11-17 01:41:01.732638] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.398 [2024-11-17 01:41:01.732656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.398 [2024-11-17 01:41:01.732664] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.398 [2024-11-17 01:41:01.732674] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.398 [2024-11-17 01:41:01.732687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.398 [2024-11-17 01:41:01.732714] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.398 [2024-11-17 01:41:01.732778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.398 [2024-11-17 01:41:01.735849] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.398 [2024-11-17 01:41:01.735874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.398 [2024-11-17 01:41:01.735884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.398 [2024-11-17 01:41:01.735918] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.398 [2024-11-17 01:41:01.735929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.398 [2024-11-17 01:41:01.735936] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.398 [2024-11-17 01:41:01.735971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.398 [2024-11-17 01:41:01.736036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.398 [2024-11-17 01:41:01.736097] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.398 [2024-11-17 01:41:01.736124] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.398 [2024-11-17 01:41:01.736133] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.398 [2024-11-17 01:41:01.736140] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.398 [2024-11-17 01:41:01.736155] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:20:53.398 00:20:53.398 01:41:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:53.398 [2024-11-17 01:41:01.841620] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:20:53.398 [2024-11-17 01:41:01.841725] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79446 ] 00:20:53.661 [2024-11-17 01:41:02.021622] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:20:53.661 [2024-11-17 01:41:02.021763] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:53.661 [2024-11-17 01:41:02.021778] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:53.661 [2024-11-17 01:41:02.021820] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:53.661 [2024-11-17 01:41:02.021853] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:53.661 [2024-11-17 01:41:02.022250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:20:53.661 [2024-11-17 01:41:02.022338] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x61500000f080 0 00:20:53.661 [2024-11-17 01:41:02.036850] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:53.661 [2024-11-17 01:41:02.036900] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:53.661 [2024-11-17 01:41:02.036911] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:53.661 [2024-11-17 01:41:02.036918] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:53.661 [2024-11-17 01:41:02.036999] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.037015] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.037023] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.661 [2024-11-17 01:41:02.037045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:53.661 [2024-11-17 01:41:02.037095] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.661 [2024-11-17 01:41:02.044887] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.661 [2024-11-17 01:41:02.044936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.661 [2024-11-17 01:41:02.044946] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.044955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.661 [2024-11-17 01:41:02.044979] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:53.661 [2024-11-17 01:41:02.045001] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:20:53.661 [2024-11-17 01:41:02.045014] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:20:53.661 [2024-11-17 01:41:02.045036] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.045046] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.661 
[2024-11-17 01:41:02.045054] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.661 [2024-11-17 01:41:02.045070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.661 [2024-11-17 01:41:02.045108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.661 [2024-11-17 01:41:02.045518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.661 [2024-11-17 01:41:02.045544] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.661 [2024-11-17 01:41:02.045558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.045566] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.661 [2024-11-17 01:41:02.045577] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:20:53.661 [2024-11-17 01:41:02.045595] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:20:53.661 [2024-11-17 01:41:02.045610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.045619] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.045626] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.661 [2024-11-17 01:41:02.045659] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.661 [2024-11-17 01:41:02.045706] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.661 [2024-11-17 01:41:02.046152] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.661 [2024-11-17 01:41:02.046192] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.661 [2024-11-17 01:41:02.046201] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.046215] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.661 [2024-11-17 01:41:02.046242] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:20:53.661 [2024-11-17 01:41:02.046262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:20:53.661 [2024-11-17 01:41:02.046275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.046283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.046290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.661 [2024-11-17 01:41:02.046304] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.661 [2024-11-17 01:41:02.046350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.661 [2024-11-17 01:41:02.046677] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.661 [2024-11-17 01:41:02.046699] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.661 [2024-11-17 01:41:02.046707] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.046714] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.661 [2024-11-17 01:41:02.046725] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:53.661 [2024-11-17 01:41:02.046743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.046771] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.046782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.661 [2024-11-17 01:41:02.046795] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.661 [2024-11-17 01:41:02.046891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.661 [2024-11-17 01:41:02.047369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.661 [2024-11-17 01:41:02.047391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.661 [2024-11-17 01:41:02.047399] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.661 [2024-11-17 01:41:02.047406] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.662 [2024-11-17 01:41:02.047416] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:20:53.662 [2024-11-17 01:41:02.047440] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:20:53.662 [2024-11-17 01:41:02.047454] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:53.662 [2024-11-17 01:41:02.047565] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:20:53.662 [2024-11-17 01:41:02.047574] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:53.662 [2024-11-17 01:41:02.047591] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.047601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.047609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.047672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.662 [2024-11-17 01:41:02.047711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.662 [2024-11-17 01:41:02.048102] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.662 [2024-11-17 01:41:02.048146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.662 [2024-11-17 01:41:02.048155] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.662 
[2024-11-17 01:41:02.048162] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.662 [2024-11-17 01:41:02.048188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:53.662 [2024-11-17 01:41:02.048206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.048215] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.048223] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.048237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.662 [2024-11-17 01:41:02.048267] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.662 [2024-11-17 01:41:02.048659] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.662 [2024-11-17 01:41:02.048681] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.662 [2024-11-17 01:41:02.048689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.048696] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.662 [2024-11-17 01:41:02.048706] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:53.662 [2024-11-17 01:41:02.048715] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:20:53.662 [2024-11-17 01:41:02.048746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:20:53.662 [2024-11-17 01:41:02.048760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:20:53.662 [2024-11-17 01:41:02.048780] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.048789] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.052928] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.662 [2024-11-17 01:41:02.052976] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.662 [2024-11-17 01:41:02.053109] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.662 [2024-11-17 01:41:02.053141] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.662 [2024-11-17 01:41:02.053149] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.053157] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=0 00:20:53.662 [2024-11-17 01:41:02.053166] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:53.662 [2024-11-17 01:41:02.053174] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:20:53.662 [2024-11-17 01:41:02.053188] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.053197] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.053534] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.662 [2024-11-17 01:41:02.053556] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.662 [2024-11-17 01:41:02.053564] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.053571] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.662 [2024-11-17 01:41:02.053592] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:20:53.662 [2024-11-17 01:41:02.053603] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:20:53.662 [2024-11-17 01:41:02.053611] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:20:53.662 [2024-11-17 01:41:02.053626] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:20:53.662 [2024-11-17 01:41:02.053635] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:20:53.662 [2024-11-17 01:41:02.053645] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:20:53.662 [2024-11-17 01:41:02.053659] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:20:53.662 [2024-11-17 01:41:02.053673] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.053681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.053694] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.053710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:53.662 [2024-11-17 01:41:02.053741] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.662 [2024-11-17 01:41:02.054132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.662 [2024-11-17 01:41:02.054162] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.662 [2024-11-17 01:41:02.054171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.662 [2024-11-17 01:41:02.054196] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054205] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054212] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.054233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.662 [2024-11-17 01:41:02.054245] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.054283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.662 [2024-11-17 01:41:02.054309] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054319] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054326] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.054336] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.662 [2024-11-17 01:41:02.054345] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054352] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.054368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.662 [2024-11-17 01:41:02.054377] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:53.662 [2024-11-17 01:41:02.054398] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:53.662 [2024-11-17 01:41:02.054411] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.662 [2024-11-17 01:41:02.054423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.662 [2024-11-17 01:41:02.054436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.662 [2024-11-17 01:41:02.054487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:20:53.662 [2024-11-17 01:41:02.054501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b280, cid 1, qid 0 00:20:53.662 [2024-11-17 01:41:02.054509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b400, cid 2, qid 0 00:20:53.663 [2024-11-17 01:41:02.054516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.663 [2024-11-17 01:41:02.054523] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.663 [2024-11-17 01:41:02.054944] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.663 [2024-11-17 01:41:02.054986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.663 [2024-11-17 01:41:02.054995] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.055003] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.663 [2024-11-17 01:41:02.055013] 
nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:20:53.663 [2024-11-17 01:41:02.055022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.055039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.055051] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.055062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.055070] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.055077] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.663 [2024-11-17 01:41:02.055107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:53.663 [2024-11-17 01:41:02.055142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.663 [2024-11-17 01:41:02.055447] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.663 [2024-11-17 01:41:02.055468] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.663 [2024-11-17 01:41:02.055476] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.055483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.663 [2024-11-17 01:41:02.055576] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.055603] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.055648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.055658] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.663 [2024-11-17 01:41:02.055673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.663 [2024-11-17 01:41:02.055704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.663 [2024-11-17 01:41:02.056023] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.663 [2024-11-17 01:41:02.056051] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.663 [2024-11-17 01:41:02.056062] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.056069] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:53.663 [2024-11-17 01:41:02.056077] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:53.663 [2024-11-17 01:41:02.056085] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.056100] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.056108] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.056120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.663 [2024-11-17 01:41:02.056130] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.663 [2024-11-17 01:41:02.056136] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.056143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.663 [2024-11-17 01:41:02.056180] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:20:53.663 [2024-11-17 01:41:02.056202] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.056229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.056247] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.056259] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.663 [2024-11-17 01:41:02.056277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.663 [2024-11-17 01:41:02.056309] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.663 [2024-11-17 01:41:02.056726] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.663 [2024-11-17 01:41:02.056749] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.663 [2024-11-17 01:41:02.056757] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.056763] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:53.663 [2024-11-17 01:41:02.056771] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:53.663 [2024-11-17 01:41:02.056778] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.056790] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.060906] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.060945] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.663 [2024-11-17 01:41:02.060963] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.663 [2024-11-17 01:41:02.060970] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.060978] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.663 [2024-11-17 01:41:02.061015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061099] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.061108] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.663 [2024-11-17 01:41:02.061125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.663 [2024-11-17 01:41:02.061162] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.663 [2024-11-17 01:41:02.061528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.663 [2024-11-17 01:41:02.061550] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.663 [2024-11-17 01:41:02.061572] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.061579] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=4 00:20:53.663 [2024-11-17 01:41:02.061591] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:53.663 [2024-11-17 01:41:02.061600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.061612] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.061619] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.061678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.663 [2024-11-17 01:41:02.061690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.663 [2024-11-17 01:41:02.061696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.663 [2024-11-17 01:41:02.061703] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.663 [2024-11-17 01:41:02.061734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061770] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061791] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061802] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061811] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061820] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:20:53.663 [2024-11-17 01:41:02.061828] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:20:53.663 [2024-11-17 01:41:02.061852] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:20:53.664 [2024-11-17 01:41:02.061897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.061909] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.061924] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.664 [2024-11-17 01:41:02.061937] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.061944] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.061951] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.061968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:53.664 [2024-11-17 01:41:02.062007] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.664 [2024-11-17 01:41:02.062029] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:53.664 [2024-11-17 01:41:02.062448] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.664 [2024-11-17 01:41:02.062469] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.664 [2024-11-17 01:41:02.062477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.062485] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.664 [2024-11-17 01:41:02.062497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.664 [2024-11-17 01:41:02.062506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.664 [2024-11-17 01:41:02.062517] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.062524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:53.664 [2024-11-17 01:41:02.062541] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.062550] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.062563] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.664 [2024-11-17 01:41:02.062592] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:53.664 [2024-11-17 01:41:02.062879] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.664 [2024-11-17 01:41:02.062902] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.664 [2024-11-17 01:41:02.062914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.062922] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:53.664 [2024-11-17 01:41:02.062940] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.062949] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.062962] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.664 [2024-11-17 01:41:02.062990] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:53.664 [2024-11-17 01:41:02.063305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.664 [2024-11-17 01:41:02.063325] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.664 [2024-11-17 01:41:02.063333] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.063340] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:53.664 [2024-11-17 01:41:02.063358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.063366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.063384] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.664 [2024-11-17 01:41:02.063413] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:53.664 [2024-11-17 01:41:02.063766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.664 [2024-11-17 01:41:02.063789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.664 [2024-11-17 01:41:02.063834] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.063844] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:53.664 [2024-11-17 01:41:02.063879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.063891] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.063907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.664 [2024-11-17 01:41:02.063922] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.063930] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.063948] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.664 [2024-11-17 01:41:02.063961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.063970] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.064003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.664 [2024-11-17 01:41:02.064021] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:53.664 [2024-11-17 01:41:02.064029] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:53.664 [2024-11-17 01:41:02.064041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.664 [2024-11-17 01:41:02.064075] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b880, cid 5, qid 0 00:20:53.664 [2024-11-17 01:41:02.064097] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b700, cid 4, qid 0 00:20:53.664 [2024-11-17 01:41:02.064107] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001ba00, cid 6, qid 0 00:20:53.664 [2024-11-17 01:41:02.064115] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:53.664 [2024-11-17 01:41:02.064681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.664 [2024-11-17 01:41:02.064701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.664 [2024-11-17 01:41:02.064726] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.064739] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=8192, cccid=5 00:20:53.664 [2024-11-17 01:41:02.064747] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b880) on tqpair(0x61500000f080): expected_datao=0, payload_size=8192 00:20:53.664 [2024-11-17 01:41:02.064755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.064799] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.068914] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.068937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.664 [2024-11-17 01:41:02.068948] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.664 [2024-11-17 01:41:02.068954] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.068961] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=4 00:20:53.664 [2024-11-17 01:41:02.068969] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b700) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:53.664 [2024-11-17 01:41:02.068976] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.068988] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.068994] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069003] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.664 [2024-11-17 01:41:02.069015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.664 [2024-11-17 01:41:02.069022] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069029] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=512, cccid=6 00:20:53.664 [2024-11-17 01:41:02.069036] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001ba00) on tqpair(0x61500000f080): expected_datao=0, payload_size=512 00:20:53.664 
[2024-11-17 01:41:02.069043] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069057] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069065] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069089] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:53.664 [2024-11-17 01:41:02.069098] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:53.664 [2024-11-17 01:41:02.069104] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069111] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x61500000f080): datao=0, datal=4096, cccid=7 00:20:53.664 [2024-11-17 01:41:02.069118] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001bb80) on tqpair(0x61500000f080): expected_datao=0, payload_size=4096 00:20:53.664 [2024-11-17 01:41:02.069128] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069140] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069146] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:53.664 [2024-11-17 01:41:02.069155] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.664 [2024-11-17 01:41:02.069164] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.664 [2024-11-17 01:41:02.069170] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.665 [2024-11-17 01:41:02.069177] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b880) on tqpair=0x61500000f080 00:20:53.665 [2024-11-17 01:41:02.069223] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.665 [2024-11-17 01:41:02.069234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.665 [2024-11-17 01:41:02.069240] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.665 [2024-11-17 01:41:02.069252] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b700) on tqpair=0x61500000f080 00:20:53.665 [2024-11-17 01:41:02.069269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.665 [2024-11-17 01:41:02.069279] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.665 [2024-11-17 01:41:02.069285] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.665 [2024-11-17 01:41:02.069292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001ba00) on tqpair=0x61500000f080 00:20:53.665 [2024-11-17 01:41:02.069305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.665 [2024-11-17 01:41:02.069315] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.665 [2024-11-17 01:41:02.069320] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.665 [2024-11-17 01:41:02.069327] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:53.665 ===================================================== 00:20:53.665 NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.665 ===================================================== 00:20:53.665 Controller Capabilities/Features 00:20:53.665 ================================ 00:20:53.665 Vendor ID: 8086 00:20:53.665 Subsystem Vendor ID: 8086 
00:20:53.665 Serial Number: SPDK00000000000001 00:20:53.665 Model Number: SPDK bdev Controller 00:20:53.665 Firmware Version: 25.01 00:20:53.665 Recommended Arb Burst: 6 00:20:53.665 IEEE OUI Identifier: e4 d2 5c 00:20:53.665 Multi-path I/O 00:20:53.665 May have multiple subsystem ports: Yes 00:20:53.665 May have multiple controllers: Yes 00:20:53.665 Associated with SR-IOV VF: No 00:20:53.665 Max Data Transfer Size: 131072 00:20:53.665 Max Number of Namespaces: 32 00:20:53.665 Max Number of I/O Queues: 127 00:20:53.665 NVMe Specification Version (VS): 1.3 00:20:53.665 NVMe Specification Version (Identify): 1.3 00:20:53.665 Maximum Queue Entries: 128 00:20:53.665 Contiguous Queues Required: Yes 00:20:53.665 Arbitration Mechanisms Supported 00:20:53.665 Weighted Round Robin: Not Supported 00:20:53.665 Vendor Specific: Not Supported 00:20:53.665 Reset Timeout: 15000 ms 00:20:53.665 Doorbell Stride: 4 bytes 00:20:53.665 NVM Subsystem Reset: Not Supported 00:20:53.665 Command Sets Supported 00:20:53.665 NVM Command Set: Supported 00:20:53.665 Boot Partition: Not Supported 00:20:53.665 Memory Page Size Minimum: 4096 bytes 00:20:53.665 Memory Page Size Maximum: 4096 bytes 00:20:53.665 Persistent Memory Region: Not Supported 00:20:53.665 Optional Asynchronous Events Supported 00:20:53.665 Namespace Attribute Notices: Supported 00:20:53.665 Firmware Activation Notices: Not Supported 00:20:53.665 ANA Change Notices: Not Supported 00:20:53.665 PLE Aggregate Log Change Notices: Not Supported 00:20:53.665 LBA Status Info Alert Notices: Not Supported 00:20:53.665 EGE Aggregate Log Change Notices: Not Supported 00:20:53.665 Normal NVM Subsystem Shutdown event: Not Supported 00:20:53.665 Zone Descriptor Change Notices: Not Supported 00:20:53.665 Discovery Log Change Notices: Not Supported 00:20:53.665 Controller Attributes 00:20:53.665 128-bit Host Identifier: Supported 00:20:53.665 Non-Operational Permissive Mode: Not Supported 00:20:53.665 NVM Sets: Not Supported 00:20:53.665 Read Recovery Levels: Not Supported 00:20:53.665 Endurance Groups: Not Supported 00:20:53.665 Predictable Latency Mode: Not Supported 00:20:53.665 Traffic Based Keep ALive: Not Supported 00:20:53.665 Namespace Granularity: Not Supported 00:20:53.665 SQ Associations: Not Supported 00:20:53.665 UUID List: Not Supported 00:20:53.665 Multi-Domain Subsystem: Not Supported 00:20:53.665 Fixed Capacity Management: Not Supported 00:20:53.665 Variable Capacity Management: Not Supported 00:20:53.665 Delete Endurance Group: Not Supported 00:20:53.665 Delete NVM Set: Not Supported 00:20:53.665 Extended LBA Formats Supported: Not Supported 00:20:53.665 Flexible Data Placement Supported: Not Supported 00:20:53.665 00:20:53.665 Controller Memory Buffer Support 00:20:53.665 ================================ 00:20:53.665 Supported: No 00:20:53.665 00:20:53.665 Persistent Memory Region Support 00:20:53.665 ================================ 00:20:53.665 Supported: No 00:20:53.665 00:20:53.665 Admin Command Set Attributes 00:20:53.665 ============================ 00:20:53.665 Security Send/Receive: Not Supported 00:20:53.665 Format NVM: Not Supported 00:20:53.665 Firmware Activate/Download: Not Supported 00:20:53.665 Namespace Management: Not Supported 00:20:53.665 Device Self-Test: Not Supported 00:20:53.665 Directives: Not Supported 00:20:53.665 NVMe-MI: Not Supported 00:20:53.665 Virtualization Management: Not Supported 00:20:53.665 Doorbell Buffer Config: Not Supported 00:20:53.665 Get LBA Status Capability: Not Supported 00:20:53.665 Command & 
Feature Lockdown Capability: Not Supported 00:20:53.665 Abort Command Limit: 4 00:20:53.665 Async Event Request Limit: 4 00:20:53.665 Number of Firmware Slots: N/A 00:20:53.665 Firmware Slot 1 Read-Only: N/A 00:20:53.665 Firmware Activation Without Reset: N/A 00:20:53.665 Multiple Update Detection Support: N/A 00:20:53.665 Firmware Update Granularity: No Information Provided 00:20:53.665 Per-Namespace SMART Log: No 00:20:53.665 Asymmetric Namespace Access Log Page: Not Supported 00:20:53.665 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:53.665 Command Effects Log Page: Supported 00:20:53.665 Get Log Page Extended Data: Supported 00:20:53.665 Telemetry Log Pages: Not Supported 00:20:53.665 Persistent Event Log Pages: Not Supported 00:20:53.665 Supported Log Pages Log Page: May Support 00:20:53.665 Commands Supported & Effects Log Page: Not Supported 00:20:53.665 Feature Identifiers & Effects Log Page:May Support 00:20:53.665 NVMe-MI Commands & Effects Log Page: May Support 00:20:53.665 Data Area 4 for Telemetry Log: Not Supported 00:20:53.665 Error Log Page Entries Supported: 128 00:20:53.665 Keep Alive: Supported 00:20:53.665 Keep Alive Granularity: 10000 ms 00:20:53.665 00:20:53.665 NVM Command Set Attributes 00:20:53.665 ========================== 00:20:53.665 Submission Queue Entry Size 00:20:53.665 Max: 64 00:20:53.665 Min: 64 00:20:53.665 Completion Queue Entry Size 00:20:53.665 Max: 16 00:20:53.665 Min: 16 00:20:53.665 Number of Namespaces: 32 00:20:53.665 Compare Command: Supported 00:20:53.666 Write Uncorrectable Command: Not Supported 00:20:53.666 Dataset Management Command: Supported 00:20:53.666 Write Zeroes Command: Supported 00:20:53.666 Set Features Save Field: Not Supported 00:20:53.666 Reservations: Supported 00:20:53.666 Timestamp: Not Supported 00:20:53.666 Copy: Supported 00:20:53.666 Volatile Write Cache: Present 00:20:53.666 Atomic Write Unit (Normal): 1 00:20:53.666 Atomic Write Unit (PFail): 1 00:20:53.666 Atomic Compare & Write Unit: 1 00:20:53.666 Fused Compare & Write: Supported 00:20:53.666 Scatter-Gather List 00:20:53.666 SGL Command Set: Supported 00:20:53.666 SGL Keyed: Supported 00:20:53.666 SGL Bit Bucket Descriptor: Not Supported 00:20:53.666 SGL Metadata Pointer: Not Supported 00:20:53.666 Oversized SGL: Not Supported 00:20:53.666 SGL Metadata Address: Not Supported 00:20:53.666 SGL Offset: Supported 00:20:53.666 Transport SGL Data Block: Not Supported 00:20:53.666 Replay Protected Memory Block: Not Supported 00:20:53.666 00:20:53.666 Firmware Slot Information 00:20:53.666 ========================= 00:20:53.666 Active slot: 1 00:20:53.666 Slot 1 Firmware Revision: 25.01 00:20:53.666 00:20:53.666 00:20:53.666 Commands Supported and Effects 00:20:53.666 ============================== 00:20:53.666 Admin Commands 00:20:53.666 -------------- 00:20:53.666 Get Log Page (02h): Supported 00:20:53.666 Identify (06h): Supported 00:20:53.666 Abort (08h): Supported 00:20:53.666 Set Features (09h): Supported 00:20:53.666 Get Features (0Ah): Supported 00:20:53.666 Asynchronous Event Request (0Ch): Supported 00:20:53.666 Keep Alive (18h): Supported 00:20:53.666 I/O Commands 00:20:53.666 ------------ 00:20:53.666 Flush (00h): Supported LBA-Change 00:20:53.666 Write (01h): Supported LBA-Change 00:20:53.666 Read (02h): Supported 00:20:53.666 Compare (05h): Supported 00:20:53.666 Write Zeroes (08h): Supported LBA-Change 00:20:53.666 Dataset Management (09h): Supported LBA-Change 00:20:53.666 Copy (19h): Supported LBA-Change 00:20:53.666 00:20:53.666 Error Log 00:20:53.666 
========= 00:20:53.666 00:20:53.666 Arbitration 00:20:53.666 =========== 00:20:53.666 Arbitration Burst: 1 00:20:53.666 00:20:53.666 Power Management 00:20:53.666 ================ 00:20:53.666 Number of Power States: 1 00:20:53.666 Current Power State: Power State #0 00:20:53.666 Power State #0: 00:20:53.666 Max Power: 0.00 W 00:20:53.666 Non-Operational State: Operational 00:20:53.666 Entry Latency: Not Reported 00:20:53.666 Exit Latency: Not Reported 00:20:53.666 Relative Read Throughput: 0 00:20:53.666 Relative Read Latency: 0 00:20:53.666 Relative Write Throughput: 0 00:20:53.666 Relative Write Latency: 0 00:20:53.666 Idle Power: Not Reported 00:20:53.666 Active Power: Not Reported 00:20:53.666 Non-Operational Permissive Mode: Not Supported 00:20:53.666 00:20:53.666 Health Information 00:20:53.666 ================== 00:20:53.666 Critical Warnings: 00:20:53.666 Available Spare Space: OK 00:20:53.666 Temperature: OK 00:20:53.666 Device Reliability: OK 00:20:53.666 Read Only: No 00:20:53.666 Volatile Memory Backup: OK 00:20:53.666 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:53.666 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:53.666 Available Spare: 0% 00:20:53.666 Available Spare Threshold: 0% 00:20:53.666 Life Percentage Used:[2024-11-17 01:41:02.069495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.666 [2024-11-17 01:41:02.069513] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x61500000f080) 00:20:53.666 [2024-11-17 01:41:02.069530] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.666 [2024-11-17 01:41:02.069574] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001bb80, cid 7, qid 0 00:20:53.666 [2024-11-17 01:41:02.069949] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.666 [2024-11-17 01:41:02.069973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.666 [2024-11-17 01:41:02.069999] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.666 [2024-11-17 01:41:02.070007] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001bb80) on tqpair=0x61500000f080 00:20:53.666 [2024-11-17 01:41:02.070099] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:20:53.666 [2024-11-17 01:41:02.070132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x61500000f080 00:20:53.666 [2024-11-17 01:41:02.070146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.666 [2024-11-17 01:41:02.070156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b280) on tqpair=0x61500000f080 00:20:53.666 [2024-11-17 01:41:02.070180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.666 [2024-11-17 01:41:02.070188] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b400) on tqpair=0x61500000f080 00:20:53.666 [2024-11-17 01:41:02.070196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.666 [2024-11-17 01:41:02.070208] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 
00:20:53.666 [2024-11-17 01:41:02.070219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:53.666 [2024-11-17 01:41:02.070234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.070242] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.070250] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.667 [2024-11-17 01:41:02.070264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.667 [2024-11-17 01:41:02.070305] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.667 [2024-11-17 01:41:02.070549] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.667 [2024-11-17 01:41:02.070573] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.667 [2024-11-17 01:41:02.070581] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.070605] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.667 [2024-11-17 01:41:02.070620] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.070633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.070642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.667 [2024-11-17 01:41:02.070656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.667 [2024-11-17 01:41:02.070696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.667 [2024-11-17 01:41:02.071135] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.667 [2024-11-17 01:41:02.071157] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.667 [2024-11-17 01:41:02.071181] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.071192] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.667 [2024-11-17 01:41:02.071203] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:20:53.667 [2024-11-17 01:41:02.071212] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:20:53.667 [2024-11-17 01:41:02.071231] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.071240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.071247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.667 [2024-11-17 01:41:02.071261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.667 [2024-11-17 01:41:02.071291] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.667 [2024-11-17 01:41:02.071554] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.667 [2024-11-17 
01:41:02.071583] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.667 [2024-11-17 01:41:02.071591] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.071598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.667 [2024-11-17 01:41:02.071646] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.071657] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.071664] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.667 [2024-11-17 01:41:02.071678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.667 [2024-11-17 01:41:02.071708] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.667 [2024-11-17 01:41:02.072091] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.667 [2024-11-17 01:41:02.072114] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.667 [2024-11-17 01:41:02.072122] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.072129] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.667 [2024-11-17 01:41:02.072147] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.072156] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.072181] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.667 [2024-11-17 01:41:02.072195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.667 [2024-11-17 01:41:02.072223] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.667 [2024-11-17 01:41:02.072516] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.667 [2024-11-17 01:41:02.072537] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.667 [2024-11-17 01:41:02.072545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.072552] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.667 [2024-11-17 01:41:02.072574] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.072583] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.072589] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.667 [2024-11-17 01:41:02.072605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.667 [2024-11-17 01:41:02.072633] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.667 [2024-11-17 01:41:02.074848] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.667 [2024-11-17 01:41:02.074878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.667 [2024-11-17 01:41:02.074907] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.074916] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.667 [2024-11-17 01:41:02.074939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.074948] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.074955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x61500000f080) 00:20:53.667 [2024-11-17 01:41:02.074969] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:53.667 [2024-11-17 01:41:02.075018] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b580, cid 3, qid 0 00:20:53.667 [2024-11-17 01:41:02.075092] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:53.667 [2024-11-17 01:41:02.075122] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:53.667 [2024-11-17 01:41:02.075128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:53.667 [2024-11-17 01:41:02.075135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x62600001b580) on tqpair=0x61500000f080 00:20:53.667 [2024-11-17 01:41:02.075150] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 3 milliseconds 00:20:53.927 0% 00:20:53.927 Data Units Read: 0 00:20:53.927 Data Units Written: 0 00:20:53.927 Host Read Commands: 0 00:20:53.927 Host Write Commands: 0 00:20:53.927 Controller Busy Time: 0 minutes 00:20:53.927 Power Cycles: 0 00:20:53.927 Power On Hours: 0 hours 00:20:53.927 Unsafe Shutdowns: 0 00:20:53.927 Unrecoverable Media Errors: 0 00:20:53.927 Lifetime Error Log Entries: 0 00:20:53.927 Warning Temperature Time: 0 minutes 00:20:53.927 Critical Temperature Time: 0 minutes 00:20:53.927 00:20:53.927 Number of Queues 00:20:53.927 ================ 00:20:53.927 Number of I/O Submission Queues: 127 00:20:53.927 Number of I/O Completion Queues: 127 00:20:53.927 00:20:53.927 Active Namespaces 00:20:53.927 ================= 00:20:53.927 Namespace ID:1 00:20:53.927 Error Recovery Timeout: Unlimited 00:20:53.927 Command Set Identifier: NVM (00h) 00:20:53.927 Deallocate: Supported 00:20:53.927 Deallocated/Unwritten Error: Not Supported 00:20:53.927 Deallocated Read Value: Unknown 00:20:53.927 Deallocate in Write Zeroes: Not Supported 00:20:53.927 Deallocated Guard Field: 0xFFFF 00:20:53.927 Flush: Supported 00:20:53.927 Reservation: Supported 00:20:53.927 Namespace Sharing Capabilities: Multiple Controllers 00:20:53.927 Size (in LBAs): 131072 (0GiB) 00:20:53.927 Capacity (in LBAs): 131072 (0GiB) 00:20:53.927 Utilization (in LBAs): 131072 (0GiB) 00:20:53.927 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:53.927 EUI64: ABCDEF0123456789 00:20:53.927 UUID: 2fe4e366-ea53-441f-9aa2-decb7ef40ed7 00:20:53.927 Thin Provisioning: Not Supported 00:20:53.927 Per-NS Atomic Units: Yes 00:20:53.927 Atomic Boundary Size (Normal): 0 00:20:53.927 Atomic Boundary Size (PFail): 0 00:20:53.927 Atomic Boundary Offset: 0 00:20:53.927 Maximum Single Source Range Length: 65535 00:20:53.927 Maximum Copy Length: 65535 00:20:53.927 Maximum Source Range Count: 1 00:20:53.927 NGUID/EUI64 Never Reused: No 00:20:53.927 Namespace Write Protected: No 00:20:53.927 Number of LBA Formats: 1 00:20:53.927 Current LBA Format: LBA Format #00 
00:20:53.927 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:53.927 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:20:53.927 rmmod nvme_tcp 00:20:53.927 rmmod nvme_fabrics 00:20:53.927 rmmod nvme_keyring 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 79402 ']' 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 79402 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 79402 ']' 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 79402 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79402 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.927 killing process with pid 79402 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79402' 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 79402 00:20:53.927 01:41:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 79402 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:20:54.865 
01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:20:54.865 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@246 -- # remove_spdk_ns 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@300 -- # return 0 00:20:55.124 ************************************ 00:20:55.124 END TEST nvmf_identify 00:20:55.124 ************************************ 00:20:55.124 00:20:55.124 real 0m4.073s 00:20:55.124 user 0m10.764s 00:20:55.124 sys 0m0.969s 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.124 01:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:55.385 ************************************ 00:20:55.385 START TEST nvmf_perf 
00:20:55.385 ************************************ 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:55.385 * Looking for test storage... 00:20:55.385 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:55.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.385 --rc genhtml_branch_coverage=1 00:20:55.385 --rc genhtml_function_coverage=1 00:20:55.385 --rc genhtml_legend=1 00:20:55.385 --rc geninfo_all_blocks=1 00:20:55.385 --rc geninfo_unexecuted_blocks=1 00:20:55.385 00:20:55.385 ' 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:55.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.385 --rc genhtml_branch_coverage=1 00:20:55.385 --rc genhtml_function_coverage=1 00:20:55.385 --rc genhtml_legend=1 00:20:55.385 --rc geninfo_all_blocks=1 00:20:55.385 --rc geninfo_unexecuted_blocks=1 00:20:55.385 00:20:55.385 ' 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:55.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.385 --rc genhtml_branch_coverage=1 00:20:55.385 --rc genhtml_function_coverage=1 00:20:55.385 --rc genhtml_legend=1 00:20:55.385 --rc geninfo_all_blocks=1 00:20:55.385 --rc geninfo_unexecuted_blocks=1 00:20:55.385 00:20:55.385 ' 00:20:55.385 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:55.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:55.385 --rc genhtml_branch_coverage=1 00:20:55.385 --rc genhtml_function_coverage=1 00:20:55.385 --rc genhtml_legend=1 00:20:55.385 --rc geninfo_all_blocks=1 00:20:55.385 --rc geninfo_unexecuted_blocks=1 00:20:55.385 00:20:55.386 ' 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:55.386 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- 
# eval '_remove_spdk_ns 15> /dev/null' 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@460 -- # nvmf_veth_init 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:55.386 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:20:55.387 Cannot find device "nvmf_init_br" 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # true 00:20:55.387 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:20:55.646 Cannot find device "nvmf_init_br2" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:20:55.646 Cannot find device "nvmf_tgt_br" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@164 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:20:55.646 Cannot find device "nvmf_tgt_br2" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@165 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:20:55.646 Cannot find device "nvmf_init_br" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@166 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:20:55.646 Cannot find device "nvmf_init_br2" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@167 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:20:55.646 Cannot find device "nvmf_tgt_br" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@168 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:20:55.646 Cannot find device "nvmf_tgt_br2" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:20:55.646 Cannot find device "nvmf_br" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@170 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:20:55.646 Cannot find device "nvmf_init_if" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:20:55.646 Cannot find device "nvmf_init_if2" 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # true 00:20:55.646 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:55.646 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.647 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@173 -- # true 00:20:55.647 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:55.647 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:55.647 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@174 -- # true 00:20:55.647 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:20:55.647 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:55.647 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:20:55.647 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:55.647 01:41:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:55.647 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:55.647 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:55.647 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:55.647 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:20:55.647 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:20:55.647 01:41:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:20:55.647 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:20:55.647 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:20:55.906 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:20:55.906 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:20:55.906 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:20:55.907 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:55.907 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.077 ms 00:20:55.907 00:20:55.907 --- 10.0.0.3 ping statistics --- 00:20:55.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.907 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:20:55.907 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 
00:20:55.907 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.054 ms 00:20:55.907 00:20:55.907 --- 10.0.0.4 ping statistics --- 00:20:55.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.907 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:55.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:20:55.907 00:20:55.907 --- 10.0.0.1 ping statistics --- 00:20:55.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.907 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:20:55.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:55.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.055 ms 00:20:55.907 00:20:55.907 --- 10.0.0.2 ping statistics --- 00:20:55.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.907 rtt min/avg/max/mdev = 0.055/0.055/0.055/0.000 ms 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@461 -- # return 0 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=79672 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 79672 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 79672 ']' 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
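Condensed, the veth/namespace bring-up that produces the ping results above amounts to the following sketch (interface names and 10.0.0.0/24 addresses as in the log; the second initiator/target pair nvmf_init_if2/nvmf_tgt_if2 and the port-4420 iptables ACCEPT rules are handled the same way and omitted here for brevity):

  ip netns add nvmf_tgt_ns_spdk                                   # target runs in its own namespace
  ip link add nvmf_init_if type veth peer name nvmf_init_br       # initiator-side veth pair
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br        # target-side veth pair
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                  # move target end into the namespace
  ip addr add 10.0.0.1/24 dev nvmf_init_if                        # initiator address
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # target address
  ip link add nvmf_br type bridge                                 # bridge joins the host-side peers
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  for l in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$l" up; done
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ping -c 1 10.0.0.3                                              # reachability check, as above

The target application is then launched inside that namespace (ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), which is the nvmfpid=79672 process the harness waits for below.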
00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.907 01:41:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:56.166 [2024-11-17 01:41:04.382902] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:20:56.166 [2024-11-17 01:41:04.383055] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:56.166 [2024-11-17 01:41:04.563342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:56.425 [2024-11-17 01:41:04.651944] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:56.425 [2024-11-17 01:41:04.652045] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:56.425 [2024-11-17 01:41:04.652072] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:56.425 [2024-11-17 01:41:04.652083] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:56.425 [2024-11-17 01:41:04.652096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:56.425 [2024-11-17 01:41:04.653840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:56.425 [2024-11-17 01:41:04.653957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:56.425 [2024-11-17 01:41:04.654029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:56.425 [2024-11-17 01:41:04.654130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:56.425 [2024-11-17 01:41:04.815159] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:20:56.994 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.994 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:20:56.994 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:56.994 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.994 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:56.994 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.994 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:20:56.994 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:20:57.563 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:20:57.563 01:41:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:57.563 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:20:57.563 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:58.132 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:58.132 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- 
# '[' -n 0000:00:10.0 ']' 00:20:58.132 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:58.132 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:58.132 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:58.391 [2024-11-17 01:41:06.631916] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:58.391 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:58.649 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:58.649 01:41:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:58.908 01:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:58.908 01:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:59.167 01:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:20:59.167 [2024-11-17 01:41:07.608357] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:20:59.426 01:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:20:59.426 01:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:59.426 01:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:20:59.426 01:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:59.426 01:41:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:21:00.803 Initializing NVMe Controllers 00:21:00.803 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:21:00.803 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:21:00.803 Initialization complete. Launching workers. 00:21:00.803 ======================================================== 00:21:00.803 Latency(us) 00:21:00.803 Device Information : IOPS MiB/s Average min max 00:21:00.803 PCIE (0000:00:10.0) NSID 1 from core 0: 21865.71 85.41 1463.72 355.32 9137.45 00:21:00.803 ======================================================== 00:21:00.803 Total : 21865.71 85.41 1463.72 355.32 9137.45 00:21:00.803 00:21:00.803 01:41:09 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:02.178 Initializing NVMe Controllers 00:21:02.178 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.178 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.178 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:02.178 Initialization complete. Launching workers. 
00:21:02.178 ======================================================== 00:21:02.178 Latency(us) 00:21:02.178 Device Information : IOPS MiB/s Average min max 00:21:02.178 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3002.02 11.73 332.50 132.09 4311.23 00:21:02.178 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.51 0.48 8160.54 7915.78 12001.67 00:21:02.178 ======================================================== 00:21:02.178 Total : 3125.53 12.21 641.83 132.09 12001.67 00:21:02.178 00:21:02.178 01:41:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:03.553 Initializing NVMe Controllers 00:21:03.553 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:03.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:03.553 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:03.553 Initialization complete. Launching workers. 00:21:03.553 ======================================================== 00:21:03.553 Latency(us) 00:21:03.553 Device Information : IOPS MiB/s Average min max 00:21:03.553 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8001.95 31.26 4001.26 610.07 10768.07 00:21:03.554 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3968.98 15.50 8140.37 6181.22 15588.15 00:21:03.554 ======================================================== 00:21:03.554 Total : 11970.93 46.76 5373.59 610.07 15588.15 00:21:03.554 00:21:03.811 01:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:03.811 01:41:12 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:07.100 Initializing NVMe Controllers 00:21:07.100 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.100 Controller IO queue size 128, less than required. 00:21:07.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:07.100 Controller IO queue size 128, less than required. 00:21:07.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:07.100 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:07.100 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:07.100 Initialization complete. Launching workers. 
00:21:07.100 ======================================================== 00:21:07.100 Latency(us) 00:21:07.100 Device Information : IOPS MiB/s Average min max 00:21:07.100 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1687.20 421.80 77637.36 42178.73 222950.25 00:21:07.100 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 612.98 153.25 223781.53 88514.63 443897.76 00:21:07.100 ======================================================== 00:21:07.100 Total : 2300.18 575.04 116583.77 42178.73 443897.76 00:21:07.100 00:21:07.100 01:41:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' -c 0xf -P 4 00:21:07.100 Initializing NVMe Controllers 00:21:07.100 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:07.100 Controller IO queue size 128, less than required. 00:21:07.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:07.100 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:07.100 Controller IO queue size 128, less than required. 00:21:07.100 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:07.100 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:21:07.100 WARNING: Some requested NVMe devices were skipped 00:21:07.100 No valid NVMe controllers or AIO or URING devices found 00:21:07.100 01:41:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' --transport-stat 00:21:09.635 Initializing NVMe Controllers 00:21:09.636 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:09.636 Controller IO queue size 128, less than required. 00:21:09.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:09.636 Controller IO queue size 128, less than required. 00:21:09.636 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:09.636 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:09.636 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:09.636 Initialization complete. Launching workers. 
00:21:09.636 00:21:09.636 ==================== 00:21:09.636 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:09.636 TCP transport: 00:21:09.636 polls: 7265 00:21:09.636 idle_polls: 4023 00:21:09.636 sock_completions: 3242 00:21:09.636 nvme_completions: 5695 00:21:09.636 submitted_requests: 8498 00:21:09.636 queued_requests: 1 00:21:09.636 00:21:09.636 ==================== 00:21:09.636 lcore 0, ns TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:09.636 TCP transport: 00:21:09.636 polls: 7948 00:21:09.636 idle_polls: 4500 00:21:09.636 sock_completions: 3448 00:21:09.636 nvme_completions: 6033 00:21:09.636 submitted_requests: 9096 00:21:09.636 queued_requests: 1 00:21:09.636 ======================================================== 00:21:09.636 Latency(us) 00:21:09.636 Device Information : IOPS MiB/s Average min max 00:21:09.636 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1423.48 355.87 92712.32 44793.77 229127.51 00:21:09.636 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1507.98 376.99 87983.77 45864.75 342536.32 00:21:09.636 ======================================================== 00:21:09.636 Total : 2931.46 732.86 90279.89 44793.77 342536.32 00:21:09.636 00:21:09.894 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:21:09.894 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.154 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:21:10.154 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:00:10.0 ']' 00:21:10.154 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:21:10.413 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # ls_guid=0c3651c5-ee06-44f4-89d6-da2c180ae279 00:21:10.413 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb 0c3651c5-ee06-44f4-89d6-da2c180ae279 00:21:10.413 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=0c3651c5-ee06-44f4-89d6-da2c180ae279 00:21:10.413 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:10.413 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:21:10.413 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:21:10.413 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:10.671 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:10.671 { 00:21:10.671 "uuid": "0c3651c5-ee06-44f4-89d6-da2c180ae279", 00:21:10.671 "name": "lvs_0", 00:21:10.671 "base_bdev": "Nvme0n1", 00:21:10.671 "total_data_clusters": 1278, 00:21:10.671 "free_clusters": 1278, 00:21:10.671 "block_size": 4096, 00:21:10.671 "cluster_size": 4194304 00:21:10.671 } 00:21:10.671 ]' 00:21:10.671 01:41:18 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="0c3651c5-ee06-44f4-89d6-da2c180ae279") .free_clusters' 00:21:10.671 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1278 00:21:10.671 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | 
select(.uuid=="0c3651c5-ee06-44f4-89d6-da2c180ae279") .cluster_size' 00:21:10.671 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:10.671 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5112 00:21:10.671 5112 00:21:10.671 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5112 00:21:10.671 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 5112 -gt 20480 ']' 00:21:10.672 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 0c3651c5-ee06-44f4-89d6-da2c180ae279 lbd_0 5112 00:21:10.930 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=2a73708f-905b-481c-a337-818fbb347080 00:21:10.930 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore 2a73708f-905b-481c-a337-818fbb347080 lvs_n_0 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=61969a90-61ea-41cb-8845-1374a1d5b6b8 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 61969a90-61ea-41cb-8845-1374a1d5b6b8 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=61969a90-61ea-41cb-8845-1374a1d5b6b8 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:11.496 { 00:21:11.496 "uuid": "0c3651c5-ee06-44f4-89d6-da2c180ae279", 00:21:11.496 "name": "lvs_0", 00:21:11.496 "base_bdev": "Nvme0n1", 00:21:11.496 "total_data_clusters": 1278, 00:21:11.496 "free_clusters": 0, 00:21:11.496 "block_size": 4096, 00:21:11.496 "cluster_size": 4194304 00:21:11.496 }, 00:21:11.496 { 00:21:11.496 "uuid": "61969a90-61ea-41cb-8845-1374a1d5b6b8", 00:21:11.496 "name": "lvs_n_0", 00:21:11.496 "base_bdev": "2a73708f-905b-481c-a337-818fbb347080", 00:21:11.496 "total_data_clusters": 1276, 00:21:11.496 "free_clusters": 1276, 00:21:11.496 "block_size": 4096, 00:21:11.496 "cluster_size": 4194304 00:21:11.496 } 00:21:11.496 ]' 00:21:11.496 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="61969a90-61ea-41cb-8845-1374a1d5b6b8") .free_clusters' 00:21:11.756 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=1276 00:21:11.756 01:41:19 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="61969a90-61ea-41cb-8845-1374a1d5b6b8") .cluster_size' 00:21:11.756 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:21:11.756 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=5104 00:21:11.756 5104 00:21:11.756 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 5104 00:21:11.756 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 5104 -gt 20480 ']' 00:21:11.756 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 61969a90-61ea-41cb-8845-1374a1d5b6b8 lbd_nest_0 5104 00:21:12.015 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=2796085e-a6c5-420f-8b34-31c1e4edce35 00:21:12.015 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:12.275 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:21:12.275 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 2796085e-a6c5-420f-8b34-31c1e4edce35 00:21:12.534 01:41:20 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:12.793 01:41:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:21:12.793 01:41:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:21:12.793 01:41:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:12.793 01:41:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:12.793 01:41:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:13.052 Initializing NVMe Controllers 00:21:13.052 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:13.052 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:13.052 WARNING: Some requested NVMe devices were skipped 00:21:13.052 No valid NVMe controllers or AIO or URING devices found 00:21:13.311 01:41:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:13.311 01:41:21 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:25.521 Initializing NVMe Controllers 00:21:25.521 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.521 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:25.521 Initialization complete. Launching workers. 
00:21:25.521 ======================================================== 00:21:25.521 Latency(us) 00:21:25.521 Device Information : IOPS MiB/s Average min max 00:21:25.521 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 855.60 106.95 1168.10 383.11 8591.61 00:21:25.521 ======================================================== 00:21:25.521 Total : 855.60 106.95 1168.10 383.11 8591.61 00:21:25.521 00:21:25.521 01:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:25.521 01:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:25.521 01:41:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:25.521 Initializing NVMe Controllers 00:21:25.521 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:25.521 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:25.521 WARNING: Some requested NVMe devices were skipped 00:21:25.521 No valid NVMe controllers or AIO or URING devices found 00:21:25.521 01:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:25.521 01:41:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:35.502 Initializing NVMe Controllers 00:21:35.502 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.502 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:35.502 Initialization complete. Launching workers. 
00:21:35.502 ======================================================== 00:21:35.502 Latency(us) 00:21:35.502 Device Information : IOPS MiB/s Average min max 00:21:35.502 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1333.90 166.74 24015.90 5405.19 75983.42 00:21:35.502 ======================================================== 00:21:35.502 Total : 1333.90 166.74 24015.90 5405.19 75983.42 00:21:35.502 00:21:35.502 01:41:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:21:35.502 01:41:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:35.502 01:41:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:35.502 Initializing NVMe Controllers 00:21:35.502 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:35.502 WARNING: controller SPDK bdev Controller (SPDK00000000000001 ) ns 1 has invalid ns size 5351931904 / block size 4096 for I/O size 512 00:21:35.502 WARNING: Some requested NVMe devices were skipped 00:21:35.502 No valid NVMe controllers or AIO or URING devices found 00:21:35.502 01:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:21:35.502 01:41:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:21:45.482 Initializing NVMe Controllers 00:21:45.483 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.483 Controller IO queue size 128, less than required. 00:21:45.483 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:45.483 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:45.483 Initialization complete. Launching workers. 
00:21:45.483 ======================================================== 00:21:45.483 Latency(us) 00:21:45.483 Device Information : IOPS MiB/s Average min max 00:21:45.483 TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3621.39 452.67 35375.33 13603.64 88150.43 00:21:45.483 ======================================================== 00:21:45.483 Total : 3621.39 452.67 35375.33 13603.64 88150.43 00:21:45.483 00:21:45.483 01:41:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.742 01:41:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2796085e-a6c5-420f-8b34-31c1e4edce35 00:21:46.001 01:41:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:21:46.260 01:41:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 2a73708f-905b-481c-a337-818fbb347080 00:21:46.519 01:41:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:46.778 rmmod nvme_tcp 00:21:46.778 rmmod nvme_fabrics 00:21:46.778 rmmod nvme_keyring 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 79672 ']' 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 79672 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 79672 ']' 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 79672 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79672 00:21:46.778 killing process with pid 79672 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79672' 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 79672 00:21:46.778 01:41:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 79672 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@246 -- # remove_spdk_ns 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@300 -- # return 0 00:21:49.314 00:21:49.314 real 0m54.025s 00:21:49.314 user 3m23.697s 00:21:49.314 sys 0m11.800s 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:49.314 ************************************ 00:21:49.314 END TEST nvmf_perf 00:21:49.314 ************************************ 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.314 ************************************ 00:21:49.314 START TEST nvmf_fio_host 00:21:49.314 ************************************ 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:49.314 * Looking for test storage... 00:21:49.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:49.314 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:49.584 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.585 --rc genhtml_branch_coverage=1 00:21:49.585 --rc genhtml_function_coverage=1 00:21:49.585 --rc genhtml_legend=1 00:21:49.585 --rc geninfo_all_blocks=1 00:21:49.585 --rc geninfo_unexecuted_blocks=1 00:21:49.585 00:21:49.585 ' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.585 --rc genhtml_branch_coverage=1 00:21:49.585 --rc genhtml_function_coverage=1 00:21:49.585 --rc genhtml_legend=1 00:21:49.585 --rc geninfo_all_blocks=1 00:21:49.585 --rc geninfo_unexecuted_blocks=1 00:21:49.585 00:21:49.585 ' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.585 --rc genhtml_branch_coverage=1 00:21:49.585 --rc genhtml_function_coverage=1 00:21:49.585 --rc genhtml_legend=1 00:21:49.585 --rc geninfo_all_blocks=1 00:21:49.585 --rc geninfo_unexecuted_blocks=1 00:21:49.585 00:21:49.585 ' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:49.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:49.585 --rc genhtml_branch_coverage=1 00:21:49.585 --rc genhtml_function_coverage=1 00:21:49.585 --rc genhtml_legend=1 00:21:49.585 --rc geninfo_all_blocks=1 00:21:49.585 --rc geninfo_unexecuted_blocks=1 00:21:49.585 00:21:49.585 ' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:49.585 01:41:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.585 01:41:57 
nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:49.585 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:49.585 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
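[Editor's note] The nvmf_veth_init entries that follow build the small veth/bridge topology the fio host test needs before it can reach the target. A condensed sketch of those same steps, assuming the namespace, interface names, and 10.0.0.0/24 addressing shown in this log (nvmf_tgt_ns_spdk, nvmf_init_if*, nvmf_tgt_if*, nvmf_br); this is a readability aid, not the nvmf/common.sh script itself:

    # target side runs inside its own network namespace
    ip netns add nvmf_tgt_ns_spdk
    # two initiator-side and two target-side veth pairs
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # initiator addresses on the host, target addresses in the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2
    # bring everything up and bridge the host-side peers together
    ip link set nvmf_init_if up && ip link set nvmf_init_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    for i in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$i" up && ip link set "$i" master nvmf_br
    done
    # allow NVMe/TCP traffic and sanity-check reachability of the target address
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3

The log entries below execute this sequence one command at a time under xtrace, then verify all four addresses with ping before the target is started.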
00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:21:49.586 Cannot find device "nvmf_init_br" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:21:49.586 Cannot find device "nvmf_init_br2" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:21:49.586 Cannot find device "nvmf_tgt_br" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@164 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # ip link set 
nvmf_tgt_br2 nomaster 00:21:49.586 Cannot find device "nvmf_tgt_br2" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@165 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:21:49.586 Cannot find device "nvmf_init_br" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@166 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:21:49.586 Cannot find device "nvmf_init_br2" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@167 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:21:49.586 Cannot find device "nvmf_tgt_br" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@168 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:21:49.586 Cannot find device "nvmf_tgt_br2" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:21:49.586 Cannot find device "nvmf_br" 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@170 -- # true 00:21:49.586 01:41:57 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:21:49.586 Cannot find device "nvmf_init_if" 00:21:49.586 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # true 00:21:49.586 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:21:49.586 Cannot find device "nvmf_init_if2" 00:21:49.586 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # true 00:21:49.586 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:49.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.586 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@173 -- # true 00:21:49.586 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:49.586 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:49.586 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@174 -- # true 00:21:49.586 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev 
nvmf_init_if 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:21:49.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:21:49.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.115 ms 00:21:49.939 00:21:49.939 --- 10.0.0.3 ping statistics --- 00:21:49.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.939 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:21:49.939 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:21:49.939 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.049 ms 00:21:49.939 00:21:49.939 --- 10.0.0.4 ping statistics --- 00:21:49.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.939 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:49.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:49.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:21:49.939 00:21:49.939 --- 10.0.0.1 ping statistics --- 00:21:49.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.939 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:21:49.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:49.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:21:49.939 00:21:49.939 --- 10.0.0.2 ping statistics --- 00:21:49.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:49.939 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@461 -- # return 0 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=80572 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 80572 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@835 -- # '[' -z 80572 ']' 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.939 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.940 01:41:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.255 [2024-11-17 01:41:58.456597] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:21:50.255 [2024-11-17 01:41:58.456757] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:50.255 [2024-11-17 01:41:58.633889] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:50.513 [2024-11-17 01:41:58.727768] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:50.514 [2024-11-17 01:41:58.727854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:50.514 [2024-11-17 01:41:58.727889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:50.514 [2024-11-17 01:41:58.727901] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:50.514 [2024-11-17 01:41:58.727912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
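For reference, the network plumbing that nvmf/common.sh performs in the trace above reduces to the sketch below. It only restates the ip/iptables commands already visible in the log (namespace nvmf_tgt_ns_spdk, bridge nvmf_br, addresses 10.0.0.1-10.0.0.4); the helper script additionally does the "Cannot find device" pre-cleanup and error handling that this condensed version omits.

    # Sketch of the veth/bridge topology built by nvmf_veth_init, as traced above.
    ip netns add nvmf_tgt_ns_spdk

    # Initiator-side and target-side veth pairs; the *_br ends get enslaved to a bridge.
    ip link add nvmf_init_if  type veth peer name nvmf_init_br
    ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
    ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2

    # Target interfaces live inside the namespace where nvmf_tgt will run.
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # Addressing: initiator side gets .1/.2, the target namespace gets .3/.4.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip addr add 10.0.0.2/24 dev nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2

    # Bring everything up and join the bridge so both sides can reach each other.
    ip link add nvmf_br type bridge
    for dev in nvmf_init_if nvmf_init_if2 nvmf_init_br nvmf_init_br2 \
               nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
        ip link set "$dev" up
    done
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" master nvmf_br
    done

    # Open the NVMe/TCP port on the initiator-facing interfaces and allow bridged forwarding.
    iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT
    iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    # Connectivity checks mirror the pings in the trace.
    ping -c 1 10.0.0.3 && ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1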
00:21:50.514 [2024-11-17 01:41:58.729590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:50.514 [2024-11-17 01:41:58.729740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.514 [2024-11-17 01:41:58.729913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:50.514 [2024-11-17 01:41:58.729954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:50.514 [2024-11-17 01:41:58.888045] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:21:51.080 01:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.080 01:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:21:51.080 01:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:51.339 [2024-11-17 01:41:59.649924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:51.339 01:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:21:51.339 01:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:51.339 01:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:51.339 01:41:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:51.597 Malloc1 00:21:51.597 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:51.856 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:52.114 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:21:52.373 [2024-11-17 01:42:00.714727] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:21:52.373 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:52.631 01:42:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:21:52.890 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:52.890 fio-3.35 00:21:52.890 Starting 1 thread 00:21:55.421 00:21:55.421 test: (groupid=0, jobs=1): err= 0: pid=80642: Sun Nov 17 01:42:03 2024 00:21:55.421 read: IOPS=7264, BW=28.4MiB/s (29.8MB/s)(57.0MiB/2008msec) 00:21:55.421 slat (usec): min=2, max=145, avg= 3.15, stdev= 2.40 00:21:55.421 clat (usec): min=1976, max=16106, avg=9159.67, stdev=694.31 00:21:55.421 lat (usec): min=2012, max=16109, avg=9162.82, stdev=694.23 00:21:55.421 clat percentiles (usec): 00:21:55.421 | 1.00th=[ 7832], 5.00th=[ 8225], 10.00th=[ 8455], 20.00th=[ 8586], 00:21:55.421 | 30.00th=[ 8848], 40.00th=[ 8979], 50.00th=[ 9110], 60.00th=[ 9241], 00:21:55.421 | 70.00th=[ 9503], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10290], 00:21:55.421 | 99.00th=[10945], 99.50th=[11207], 99.90th=[13566], 99.95th=[14877], 00:21:55.421 | 99.99th=[16057] 00:21:55.421 bw ( KiB/s): min=27672, max=30328, per=99.94%, avg=29042.00, stdev=1179.04, samples=4 00:21:55.421 iops : min= 6918, max= 7582, avg=7260.50, stdev=294.76, samples=4 00:21:55.421 write: IOPS=7230, BW=28.2MiB/s (29.6MB/s)(56.7MiB/2008msec); 0 zone resets 00:21:55.421 slat (usec): min=2, max=112, avg= 3.32, stdev= 2.19 00:21:55.421 clat (usec): min=1224, max=15017, avg=8382.43, stdev=656.10 00:21:55.421 lat (usec): min=1232, max=15020, avg=8385.75, stdev=656.10 00:21:55.421 clat percentiles (usec): 00:21:55.421 | 1.00th=[ 7111], 5.00th=[ 7504], 10.00th=[ 7701], 20.00th=[ 7898], 00:21:55.421 | 30.00th=[ 8094], 40.00th=[ 8225], 50.00th=[ 8356], 60.00th=[ 8455], 00:21:55.421 | 70.00th=[ 8586], 80.00th=[ 8848], 90.00th=[ 9110], 95.00th=[ 9372], 00:21:55.421 | 99.00th=[10028], 99.50th=[10421], 99.90th=[13435], 99.95th=[13698], 00:21:55.421 | 99.99th=[15008] 00:21:55.421 bw ( KiB/s): min=27888, max=29680, per=99.99%, avg=28918.00, stdev=833.88, samples=4 00:21:55.421 iops : min= 6972, max= 7420, avg=7229.50, stdev=208.47, samples=4 
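The provisioning traced above is the standard SPDK RPC sequence for an NVMe/TCP target, followed by fio driving it through the SPDK NVMe ioengine; condensed, with the paths and addresses taken from this run, it amounts to the sketch below. The contents of example_config.fio are not part of the log, so only the invocation pattern is shown.

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # talks to the nvmf_tgt started above

    # Transport, backing bdev, subsystem, namespace, listeners -- as issued in the trace.
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc1
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.3 -s 4420

    # fio is run with the SPDK fio plugin; because this is an ASan build, libasan is
    # preloaded alongside the plugin (the sanitizer loop in the trace resolves that path
    # via ldd before setting LD_PRELOAD).
    LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' \
        /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096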
00:21:55.421 lat (msec) : 2=0.01%, 4=0.11%, 10=94.82%, 20=5.06% 00:21:55.421 cpu : usr=70.35%, sys=22.42%, ctx=26, majf=0, minf=1554 00:21:55.421 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:21:55.421 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.421 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:55.421 issued rwts: total=14588,14519,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.421 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:55.421 00:21:55.421 Run status group 0 (all jobs): 00:21:55.421 READ: bw=28.4MiB/s (29.8MB/s), 28.4MiB/s-28.4MiB/s (29.8MB/s-29.8MB/s), io=57.0MiB (59.8MB), run=2008-2008msec 00:21:55.421 WRITE: bw=28.2MiB/s (29.6MB/s), 28.2MiB/s-28.2MiB/s (29.6MB/s-29.6MB/s), io=56.7MiB (59.5MB), run=2008-2008msec 00:21:55.421 ----------------------------------------------------- 00:21:55.421 Suppressions used: 00:21:55.421 count bytes template 00:21:55.421 1 57 /usr/src/fio/parse.c 00:21:55.421 1 8 libtcmalloc_minimal.so 00:21:55.421 ----------------------------------------------------- 00:21:55.421 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:21:55.421 01:42:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' 00:21:55.680 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:55.680 fio-3.35 00:21:55.680 Starting 1 thread 00:21:58.213 00:21:58.213 test: (groupid=0, jobs=1): err= 0: pid=80688: Sun Nov 17 01:42:06 2024 00:21:58.213 read: IOPS=7128, BW=111MiB/s (117MB/s)(224MiB/2008msec) 00:21:58.213 slat (usec): min=3, max=141, avg= 4.43, stdev= 2.42 00:21:58.213 clat (usec): min=2395, max=20379, avg=10054.94, stdev=2921.64 00:21:58.213 lat (usec): min=2399, max=20383, avg=10059.38, stdev=2921.69 00:21:58.213 clat percentiles (usec): 00:21:58.213 | 1.00th=[ 4817], 5.00th=[ 5669], 10.00th=[ 6325], 20.00th=[ 7504], 00:21:58.213 | 30.00th=[ 8455], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552], 00:21:58.213 | 70.00th=[11469], 80.00th=[12256], 90.00th=[13960], 95.00th=[15533], 00:21:58.213 | 99.00th=[17695], 99.50th=[18744], 99.90th=[20055], 99.95th=[20317], 00:21:58.213 | 99.99th=[20317] 00:21:58.213 bw ( KiB/s): min=48960, max=63584, per=50.59%, avg=57704.00, stdev=6598.85, samples=4 00:21:58.213 iops : min= 3060, max= 3974, avg=3606.50, stdev=412.43, samples=4 00:21:58.213 write: IOPS=4135, BW=64.6MiB/s (67.8MB/s)(118MiB/1828msec); 0 zone resets 00:21:58.213 slat (usec): min=33, max=257, avg=39.48, stdev= 8.15 00:21:58.213 clat (usec): min=7134, max=25287, avg=14057.86, stdev=2724.92 00:21:58.213 lat (usec): min=7168, max=25323, avg=14097.34, stdev=2726.41 00:21:58.213 clat percentiles (usec): 00:21:58.213 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10945], 20.00th=[11731], 00:21:58.213 | 30.00th=[12387], 40.00th=[13042], 50.00th=[13698], 60.00th=[14484], 00:21:58.213 | 70.00th=[15401], 80.00th=[16450], 90.00th=[17695], 95.00th=[18744], 00:21:58.213 | 99.00th=[21103], 99.50th=[22152], 99.90th=[24249], 99.95th=[24511], 00:21:58.213 | 99.99th=[25297] 00:21:58.213 bw ( KiB/s): min=51360, max=65952, per=90.41%, avg=59824.00, stdev=6643.30, samples=4 00:21:58.213 iops : min= 3210, max= 4122, avg=3739.00, stdev=415.21, samples=4 00:21:58.213 lat (msec) : 4=0.14%, 10=36.30%, 20=62.81%, 50=0.75% 00:21:58.213 cpu : usr=81.96%, sys=13.65%, ctx=3, majf=0, minf=2203 00:21:58.213 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:21:58.213 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.213 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:58.213 issued rwts: total=14315,7560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.213 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.213 00:21:58.213 Run status group 0 (all jobs): 00:21:58.213 READ: bw=111MiB/s (117MB/s), 111MiB/s-111MiB/s (117MB/s-117MB/s), io=224MiB (235MB), run=2008-2008msec 00:21:58.213 WRITE: bw=64.6MiB/s (67.8MB/s), 64.6MiB/s-64.6MiB/s (67.8MB/s-67.8MB/s), io=118MiB (124MB), run=1828-1828msec 00:21:58.213 ----------------------------------------------------- 00:21:58.213 Suppressions used: 00:21:58.213 count bytes template 00:21:58.213 1 57 /usr/src/fio/parse.c 00:21:58.213 503 48288 /usr/src/fio/iolog.c 00:21:58.213 1 8 libtcmalloc_minimal.so 00:21:58.213 ----------------------------------------------------- 00:21:58.213 00:21:58.471 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.730 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:21:58.730 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:21:58.730 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:21:58.730 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:21:58.730 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # local bdfs 00:21:58.730 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:21:58.730 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:21:58.730 01:42:06 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:21:58.730 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:21:58.730 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:21:58.730 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 -i 10.0.0.3 00:21:58.989 Nvme0n1 00:21:58.989 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:21:59.248 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=1bad4074-c17c-4ba8-86b1-353fc17aa6d1 00:21:59.248 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 1bad4074-c17c-4ba8-86b1-353fc17aa6d1 00:21:59.248 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=1bad4074-c17c-4ba8-86b1-353fc17aa6d1 00:21:59.248 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:21:59.248 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:21:59.248 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:21:59.248 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:21:59.506 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:21:59.506 { 00:21:59.506 "uuid": "1bad4074-c17c-4ba8-86b1-353fc17aa6d1", 00:21:59.506 "name": "lvs_0", 00:21:59.506 "base_bdev": "Nvme0n1", 00:21:59.506 "total_data_clusters": 4, 00:21:59.506 "free_clusters": 4, 00:21:59.506 "block_size": 4096, 00:21:59.506 "cluster_size": 1073741824 00:21:59.506 } 00:21:59.506 ]' 00:21:59.506 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="1bad4074-c17c-4ba8-86b1-353fc17aa6d1") .free_clusters' 00:21:59.765 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=4 00:21:59.765 01:42:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="1bad4074-c17c-4ba8-86b1-353fc17aa6d1") .cluster_size' 00:21:59.765 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:21:59.765 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4096 00:21:59.765 4096 00:21:59.765 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1378 -- # echo 4096 00:21:59.765 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 4096 00:22:00.024 6097d4e5-d852-4cb0-82fe-f63d430eb2f5 00:22:00.024 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:22:00.283 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:22:00.541 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:22:00.541 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:00.541 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:00.541 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:00.541 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:00.542 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:00.542 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:00.542 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:00.542 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:00.542 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:00.542 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:00.542 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:00.542 01:42:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:00.800 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:00.800 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:00.800 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:22:00.800 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:00.800 01:42:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:00.800 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:00.800 fio-3.35 00:22:00.800 Starting 1 thread 
00:22:03.329 00:22:03.329 test: (groupid=0, jobs=1): err= 0: pid=80796: Sun Nov 17 01:42:11 2024 00:22:03.329 read: IOPS=5016, BW=19.6MiB/s (20.5MB/s)(39.4MiB/2011msec) 00:22:03.329 slat (usec): min=2, max=181, avg= 3.48, stdev= 2.98 00:22:03.329 clat (usec): min=3699, max=21410, avg=13273.57, stdev=1264.35 00:22:03.329 lat (usec): min=3703, max=21415, avg=13277.05, stdev=1264.28 00:22:03.329 clat percentiles (usec): 00:22:03.329 | 1.00th=[10945], 5.00th=[11600], 10.00th=[11994], 20.00th=[12387], 00:22:03.329 | 30.00th=[12649], 40.00th=[12911], 50.00th=[13173], 60.00th=[13435], 00:22:03.329 | 70.00th=[13698], 80.00th=[14091], 90.00th=[14615], 95.00th=[15139], 00:22:03.329 | 99.00th=[18220], 99.50th=[19006], 99.90th=[20841], 99.95th=[21103], 00:22:03.329 | 99.99th=[21365] 00:22:03.329 bw ( KiB/s): min=19168, max=20680, per=99.95%, avg=20058.00, stdev=650.98, samples=4 00:22:03.329 iops : min= 4792, max= 5170, avg=5014.50, stdev=162.75, samples=4 00:22:03.329 write: IOPS=5013, BW=19.6MiB/s (20.5MB/s)(39.4MiB/2011msec); 0 zone resets 00:22:03.329 slat (usec): min=2, max=124, avg= 3.62, stdev= 2.61 00:22:03.329 clat (usec): min=2267, max=21651, avg=12058.45, stdev=1241.17 00:22:03.329 lat (usec): min=2293, max=21655, avg=12062.07, stdev=1241.20 00:22:03.329 clat percentiles (usec): 00:22:03.329 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[10814], 20.00th=[11207], 00:22:03.329 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:22:03.329 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13304], 95.00th=[13698], 00:22:03.329 | 99.00th=[16909], 99.50th=[18220], 99.90th=[19530], 99.95th=[20841], 00:22:03.329 | 99.99th=[21627] 00:22:03.329 bw ( KiB/s): min=19608, max=20288, per=99.88%, avg=20030.00, stdev=300.50, samples=4 00:22:03.329 iops : min= 4902, max= 5072, avg=5007.50, stdev=75.12, samples=4 00:22:03.329 lat (msec) : 4=0.06%, 10=0.87%, 20=98.93%, 50=0.14% 00:22:03.329 cpu : usr=74.13%, sys=20.15%, ctx=19, majf=0, minf=1554 00:22:03.329 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:03.329 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:03.329 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:03.329 issued rwts: total=10089,10082,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:03.329 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:03.329 00:22:03.329 Run status group 0 (all jobs): 00:22:03.329 READ: bw=19.6MiB/s (20.5MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=39.4MiB (41.3MB), run=2011-2011msec 00:22:03.329 WRITE: bw=19.6MiB/s (20.5MB/s), 19.6MiB/s-19.6MiB/s (20.5MB/s-20.5MB/s), io=39.4MiB (41.3MB), run=2011-2011msec 00:22:03.329 ----------------------------------------------------- 00:22:03.329 Suppressions used: 00:22:03.329 count bytes template 00:22:03.329 1 58 /usr/src/fio/parse.c 00:22:03.329 1 8 libtcmalloc_minimal.so 00:22:03.329 ----------------------------------------------------- 00:22:03.329 00:22:03.329 01:42:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:03.587 01:42:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:22:03.845 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=c92ec634-3d20-4a94-855c-3a40c1801a9f 00:22:03.845 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # 
get_lvs_free_mb c92ec634-3d20-4a94-855c-3a40c1801a9f 00:22:03.845 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=c92ec634-3d20-4a94-855c-3a40c1801a9f 00:22:03.845 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:22:03.845 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:22:03.845 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:22:03.845 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:22:04.104 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:22:04.104 { 00:22:04.104 "uuid": "1bad4074-c17c-4ba8-86b1-353fc17aa6d1", 00:22:04.104 "name": "lvs_0", 00:22:04.104 "base_bdev": "Nvme0n1", 00:22:04.104 "total_data_clusters": 4, 00:22:04.104 "free_clusters": 0, 00:22:04.104 "block_size": 4096, 00:22:04.104 "cluster_size": 1073741824 00:22:04.104 }, 00:22:04.104 { 00:22:04.104 "uuid": "c92ec634-3d20-4a94-855c-3a40c1801a9f", 00:22:04.104 "name": "lvs_n_0", 00:22:04.104 "base_bdev": "6097d4e5-d852-4cb0-82fe-f63d430eb2f5", 00:22:04.104 "total_data_clusters": 1022, 00:22:04.104 "free_clusters": 1022, 00:22:04.104 "block_size": 4096, 00:22:04.104 "cluster_size": 4194304 00:22:04.104 } 00:22:04.104 ]' 00:22:04.104 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c92ec634-3d20-4a94-855c-3a40c1801a9f") .free_clusters' 00:22:04.104 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=1022 00:22:04.104 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c92ec634-3d20-4a94-855c-3a40c1801a9f") .cluster_size' 00:22:04.104 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:22:04.104 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=4088 00:22:04.104 4088 00:22:04.104 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 4088 00:22:04.104 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 4088 00:22:04.363 6a14634c-cca2-4aa3-8367-b4b3cf995961 00:22:04.622 01:42:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:22:04.880 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:22:04.880 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.3 -s 4420 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 
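The free-space figures above (4096 for lvs_0 earlier, 4088 for lvs_n_0 here) are simply free_clusters multiplied by cluster_size, converted to MiB; get_lvs_free_mb pulls both values out of bdev_lvol_get_lvstores with jq, roughly as in this sketch (the single jq expression is a simplified stand-in for the helper's per-UUID queries shown in the trace):

    # free MiB = free_clusters * cluster_size / 1048576
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores |
        jq -r '.[] | "\(.name): \(.free_clusters * .cluster_size / 1048576) MiB free"'

    # In this run:
    #   lvs_0   : 4 clusters    x 1073741824 B = 4096 MiB (all of it then allocated to lbd_0)
    #   lvs_n_0 : 1022 clusters x 4194304 B    = 4088 MiB (backs lbd_nest_0)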
00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1351 -- # break 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:22:05.447 01:42:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.3 trsvcid=4420 ns=1' --bs=4096 00:22:05.447 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:05.447 fio-3.35 00:22:05.447 Starting 1 thread 00:22:07.982 00:22:07.982 test: (groupid=0, jobs=1): err= 0: pid=80872: Sun Nov 17 01:42:16 2024 00:22:07.982 read: IOPS=4523, BW=17.7MiB/s (18.5MB/s)(35.6MiB/2012msec) 00:22:07.982 slat (usec): min=2, max=245, avg= 3.56, stdev= 4.33 00:22:07.982 clat (usec): min=3882, max=27293, avg=14719.94, stdev=1252.68 00:22:07.982 lat (usec): min=3887, max=27296, avg=14723.50, stdev=1252.41 00:22:07.982 clat percentiles (usec): 00:22:07.982 | 1.00th=[12125], 5.00th=[12911], 10.00th=[13304], 20.00th=[13829], 00:22:07.982 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[15008], 00:22:07.982 | 70.00th=[15270], 80.00th=[15664], 90.00th=[16188], 95.00th=[16581], 00:22:07.982 | 99.00th=[17433], 99.50th=[17957], 99.90th=[23987], 99.95th=[24249], 00:22:07.982 | 99.99th=[27395] 00:22:07.982 bw ( KiB/s): min=17136, max=18408, per=99.89%, avg=18074.75, stdev=626.34, samples=4 00:22:07.982 iops : min= 4284, max= 4602, avg=4518.50, stdev=156.46, samples=4 00:22:07.982 write: IOPS=4527, BW=17.7MiB/s (18.5MB/s)(35.6MiB/2012msec); 0 zone resets 00:22:07.982 slat (usec): min=2, max=140, avg= 3.67, stdev= 3.04 00:22:07.982 clat (usec): min=2480, max=26043, avg=13370.70, stdev=1251.19 00:22:07.982 lat (usec): min=2490, max=26046, avg=13374.37, stdev=1251.14 00:22:07.982 clat percentiles (usec): 00:22:07.982 | 1.00th=[10814], 5.00th=[11600], 10.00th=[11994], 
20.00th=[12518], 00:22:07.982 | 30.00th=[12780], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:22:07.982 | 70.00th=[13829], 80.00th=[14222], 90.00th=[14746], 95.00th=[15139], 00:22:07.982 | 99.00th=[16057], 99.50th=[17695], 99.90th=[22676], 99.95th=[25822], 00:22:07.982 | 99.99th=[26084] 00:22:07.982 bw ( KiB/s): min=18011, max=18120, per=99.79%, avg=18072.75, stdev=52.28, samples=4 00:22:07.982 iops : min= 4502, max= 4530, avg=4518.00, stdev=13.37, samples=4 00:22:07.982 lat (msec) : 4=0.01%, 10=0.36%, 20=99.33%, 50=0.30% 00:22:07.982 cpu : usr=74.24%, sys=20.39%, ctx=5, majf=0, minf=1555 00:22:07.982 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:07.982 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:07.982 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:07.982 issued rwts: total=9101,9109,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:07.982 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:07.982 00:22:07.982 Run status group 0 (all jobs): 00:22:07.982 READ: bw=17.7MiB/s (18.5MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=35.6MiB (37.3MB), run=2012-2012msec 00:22:07.982 WRITE: bw=17.7MiB/s (18.5MB/s), 17.7MiB/s-17.7MiB/s (18.5MB/s-18.5MB/s), io=35.6MiB (37.3MB), run=2012-2012msec 00:22:07.982 ----------------------------------------------------- 00:22:07.982 Suppressions used: 00:22:07.982 count bytes template 00:22:07.982 1 58 /usr/src/fio/parse.c 00:22:07.982 1 8 libtcmalloc_minimal.so 00:22:07.982 ----------------------------------------------------- 00:22:07.982 00:22:07.982 01:42:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:08.242 01:42:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:22:08.242 01:42:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:22:08.501 01:42:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:22:09.068 01:42:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:22:09.068 01:42:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:22:09.327 01:42:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:10.264 rmmod 
nvme_tcp 00:22:10.264 rmmod nvme_fabrics 00:22:10.264 rmmod nvme_keyring 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 80572 ']' 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 80572 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 80572 ']' 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 80572 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80572 00:22:10.264 killing process with pid 80572 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80572' 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 80572 00:22:10.264 01:42:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 80572 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.201 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:11.202 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:11.202 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:11.202 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # ip link set 
nvmf_tgt_br2 down 00:22:11.202 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:11.202 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:11.202 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:11.202 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@300 -- # return 0 00:22:11.461 00:22:11.461 real 0m22.056s 00:22:11.461 user 1m34.776s 00:22:11.461 sys 0m4.593s 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.461 ************************************ 00:22:11.461 END TEST nvmf_fio_host 00:22:11.461 ************************************ 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.461 ************************************ 00:22:11.461 START TEST nvmf_failover 00:22:11.461 ************************************ 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:11.461 * Looking for test storage... 
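The cleanup that closed the fio_host run just above mirrors the setup: the RPC-created lvols, lvstores and subsystems are deleted, the target process is killed, the SPDK-tagged firewall rules are stripped, and the virtual links and namespace are removed. A condensed sketch of those teardown steps as they appear in the trace (the namespace deletion is an assumption about what remove_spdk_ns does; its body is not shown in the log):

    # Stop the target started earlier (nvmfpid=80572 in this run).
    kill "$nvmfpid" && wait "$nvmfpid"

    # Remove only the firewall rules carrying the SPDK_NVMF comment tag.
    iptables-save | grep -v SPDK_NVMF | iptables-restore

    # Unwind the veth/bridge topology and the target namespace.
    for dev in nvmf_init_br nvmf_init_br2 nvmf_tgt_br nvmf_tgt_br2; do
        ip link set "$dev" nomaster
        ip link set "$dev" down
    done
    ip link delete nvmf_br type bridge
    ip link delete nvmf_init_if
    ip link delete nvmf_init_if2
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2
    ip netns delete nvmf_tgt_ns_spdk          # assumed equivalent of remove_spdk_ns
    modprobe -r nvme-tcp nvme-fabrics         # module unload, matching the rmmod lines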
00:22:11.461 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:11.461 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:11.720 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:11.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.721 --rc genhtml_branch_coverage=1 00:22:11.721 --rc genhtml_function_coverage=1 00:22:11.721 --rc genhtml_legend=1 00:22:11.721 --rc geninfo_all_blocks=1 00:22:11.721 --rc geninfo_unexecuted_blocks=1 00:22:11.721 00:22:11.721 ' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:11.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.721 --rc genhtml_branch_coverage=1 00:22:11.721 --rc genhtml_function_coverage=1 00:22:11.721 --rc genhtml_legend=1 00:22:11.721 --rc geninfo_all_blocks=1 00:22:11.721 --rc geninfo_unexecuted_blocks=1 00:22:11.721 00:22:11.721 ' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:11.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.721 --rc genhtml_branch_coverage=1 00:22:11.721 --rc genhtml_function_coverage=1 00:22:11.721 --rc genhtml_legend=1 00:22:11.721 --rc geninfo_all_blocks=1 00:22:11.721 --rc geninfo_unexecuted_blocks=1 00:22:11.721 00:22:11.721 ' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:11.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.721 --rc genhtml_branch_coverage=1 00:22:11.721 --rc genhtml_function_coverage=1 00:22:11.721 --rc genhtml_legend=1 00:22:11.721 --rc geninfo_all_blocks=1 00:22:11.721 --rc geninfo_unexecuted_blocks=1 00:22:11.721 00:22:11.721 ' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.721 
01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:11.721 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:11.721 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:11.722 01:42:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 
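One detail from the common.sh sourcing above: the host identity used on the initiator side is freshly generated for the failover run. A minimal sketch of that derivation (the suffix extraction is an assumption about how the helper splits the NQN; the resulting NVME_HOST array appears verbatim in the trace):

    NVME_HOSTNQN=$(nvme gen-hostnqn)     # e.g. nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}  # assumed extraction of the trailing UUID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")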
00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:11.722 Cannot find device "nvmf_init_br" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:11.722 Cannot find device "nvmf_init_br2" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 
00:22:11.722 Cannot find device "nvmf_tgt_br" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@164 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:11.722 Cannot find device "nvmf_tgt_br2" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@165 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:11.722 Cannot find device "nvmf_init_br" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@166 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:11.722 Cannot find device "nvmf_init_br2" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@167 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:11.722 Cannot find device "nvmf_tgt_br" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@168 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:11.722 Cannot find device "nvmf_tgt_br2" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:11.722 Cannot find device "nvmf_br" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@170 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:11.722 Cannot find device "nvmf_init_if" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:11.722 Cannot find device "nvmf_init_if2" 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:11.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@173 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:11.722 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@174 -- # true 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:11.722 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:11.982 
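The "Cannot find device" and "Cannot open network namespace" messages above are expected on a clean host: nvmf_veth_init first tears down any leftovers from a previous run (each failing teardown command is followed by a bare true, so the script tolerates the failure), then builds a fresh topology. The creation steps just traced amount to the following; the addressing, bridging and iptables ACCEPT rules continue in the log below.

# Fresh topology built by nvmf_veth_init (interface and namespace names as in the log above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if  type veth peer name nvmf_init_br
ip link add nvmf_init_if2 type veth peer name nvmf_init_br2
ip link add nvmf_tgt_if   type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2  type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk         # target-side ends move into the namespace
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk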
01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j 
ACCEPT' 00:22:11.982 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:11.982 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:11.982 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.073 ms 00:22:11.983 00:22:11.983 --- 10.0.0.3 ping statistics --- 00:22:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.983 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:11.983 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:11.983 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:22:11.983 00:22:11.983 --- 10.0.0.4 ping statistics --- 00:22:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.983 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:11.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:11.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:22:11.983 00:22:11.983 --- 10.0.0.1 ping statistics --- 00:22:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.983 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:11.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:11.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.056 ms 00:22:11.983 00:22:11.983 --- 10.0.0.2 ping statistics --- 00:22:11.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:11.983 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@461 -- # return 0 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=81165 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 81165 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:11.983 01:42:20 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81165 ']' 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:11.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:11.983 01:42:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:12.242 [2024-11-17 01:42:20.465654] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:12.242 [2024-11-17 01:42:20.465786] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:12.242 [2024-11-17 01:42:20.631553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:12.502 [2024-11-17 01:42:20.714569] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:12.502 [2024-11-17 01:42:20.714865] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:12.502 [2024-11-17 01:42:20.714896] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:12.502 [2024-11-17 01:42:20.714908] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:12.502 [2024-11-17 01:42:20.714923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
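At this point the fabric has been verified with the four pings above (initiator addresses 10.0.0.1/.2, in-namespace target addresses 10.0.0.3/.4), the transport options are set for TCP, and nvmfappstart launches nvmf_tgt inside the namespace on cores 1-3 (-m 0xE), then blocks until the RPC socket answers. A simplified, hedged equivalent of that start-and-wait step; the polling loop is only a stand-in for the harness's waitforlisten:

# Launch the target inside the namespace and wait for its RPC socket (simplified sketch).
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2                                           # poll until the app serves RPCs
done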
00:22:12.502 [2024-11-17 01:42:20.716674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:12.502 [2024-11-17 01:42:20.716766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:12.502 [2024-11-17 01:42:20.716779] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:12.502 [2024-11-17 01:42:20.870057] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:13.070 01:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:13.070 01:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:13.070 01:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:13.070 01:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.070 01:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:13.070 01:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:13.070 01:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:13.329 [2024-11-17 01:42:21.678997] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:13.329 01:42:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:13.588 Malloc0 00:22:13.847 01:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:13.847 01:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:14.106 01:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:14.366 [2024-11-17 01:42:22.737672] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:14.366 01:42:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:14.625 [2024-11-17 01:42:23.029865] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:14.625 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:14.885 [2024-11-17 01:42:23.246009] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:14.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
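The RPC sequence recorded above provisions everything the failover test needs: a TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem, and listeners on all three ports of 10.0.0.3. Collected in one place, with the same calls and arguments as in the log:

# Provisioning sequence driven by host/failover.sh (arguments as recorded above).
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0               # 64 MiB backing bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                          # one listener per test port
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s "$port"
done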
00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=81224 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 81224 /var/tmp/bdevperf.sock 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81224 ']' 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:14.885 01:42:23 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:15.837 01:42:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:15.837 01:42:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:15.837 01:42:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:16.127 NVMe0n1 00:22:16.127 01:42:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:16.705 00:22:16.705 01:42:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=81248 00:22:16.705 01:42:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:16.705 01:42:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:17.644 01:42:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:17.903 01:42:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:21.196 01:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:21.196 00:22:21.196 01:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:21.455 01:42:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:24.745 01:42:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:22:24.745 [2024-11-17 01:42:33.112565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:24.745 01:42:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:22:26.123 01:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:26.123 01:42:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 81248 00:22:32.697 { 00:22:32.697 "results": [ 00:22:32.697 { 00:22:32.697 "job": "NVMe0n1", 00:22:32.697 "core_mask": "0x1", 00:22:32.697 "workload": "verify", 00:22:32.697 "status": "finished", 00:22:32.697 "verify_range": { 00:22:32.697 "start": 0, 00:22:32.697 "length": 16384 00:22:32.697 }, 00:22:32.697 "queue_depth": 128, 00:22:32.697 "io_size": 4096, 00:22:32.697 "runtime": 15.009251, 00:22:32.697 "iops": 8101.470219933027, 00:22:32.697 "mibps": 31.646368046613386, 00:22:32.697 "io_failed": 3205, 00:22:32.697 "io_timeout": 0, 00:22:32.697 "avg_latency_us": 15362.439748314056, 00:22:32.697 "min_latency_us": 636.7418181818182, 00:22:32.697 "max_latency_us": 17635.14181818182 00:22:32.697 } 00:22:32.697 ], 00:22:32.697 "core_count": 1 00:22:32.697 } 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 81224 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81224 ']' 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81224 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81224 00:22:32.697 killing process with pid 81224 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81224' 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81224 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81224 00:22:32.697 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:32.697 [2024-11-17 01:42:23.340405] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
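The try.txt dump that follows is bdevperf's own log for the 15-second verify run (-q 128 -o 4096 -w verify -t 15). Its aborted-I/O records correspond to the path switches the script drove above; collected in order, that choreography was roughly the following (same RPC calls and arguments as in the trace, with the bdevperf controller RPCs going to /var/tmp/bdevperf.sock and the listener changes to the target's default socket):

# Path-switch sequence from host/failover.sh, reconstructed from the trace above.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover              # primary path
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover              # standby path
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420   # drop path 1
sleep 3
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 \
     -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover              # add a third path
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421   # drop path 2
sleep 3
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420      # 4420 comes back
sleep 1
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422   # drop path 3

Even with the three forced switches, the results block above shows the verify workload finishing at roughly 8101 IOPS with 3205 I/Os failed while paths were changing.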
00:22:32.697 [2024-11-17 01:42:23.340580] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81224 ] 00:22:32.697 [2024-11-17 01:42:23.508338] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.697 [2024-11-17 01:42:23.596449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.697 [2024-11-17 01:42:23.760583] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:32.697 Running I/O for 15 seconds... 00:22:32.697 6309.00 IOPS, 24.64 MiB/s [2024-11-17T01:42:41.156Z] [2024-11-17 01:42:26.182857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:59488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.697 [2024-11-17 01:42:26.182943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.182995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:59496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.697 [2024-11-17 01:42:26.183018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:59504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.697 [2024-11-17 01:42:26.183065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:59512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.697 [2024-11-17 01:42:26.183109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:59520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.697 [2024-11-17 01:42:26.183151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:59528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.697 [2024-11-17 01:42:26.183192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:59536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.697 [2024-11-17 01:42:26.183233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:58544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.697 [2024-11-17 01:42:26.183274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:32.697 [2024-11-17 01:42:26.183297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:58552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.697 [2024-11-17 01:42:26.183315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:58560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.697 [2024-11-17 01:42:26.183356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:58568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.697 [2024-11-17 01:42:26.183416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.697 [2024-11-17 01:42:26.183443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:58576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:58584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:58592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:58600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:58608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:58616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:58624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183790] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:58632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:58648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.183940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.183974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:59544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.698 [2024-11-17 01:42:26.184029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:59552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.698 [2024-11-17 01:42:26.184081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:58664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:58672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:58680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:58688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184268] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:119 nsid:1 lba:58696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:58712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.698 [2024-11-17 01:42:26.184424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:58720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:58728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:58736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:58744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:58752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:58760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 
lba:58768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:58776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:58784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:58792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:58800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.184961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.184994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.185019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:58816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.185039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.185062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:58824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.185081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.185105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:58832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.185124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.185147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:58840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.185167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.185190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:58848 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.185232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.185259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:58856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.185278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.698 [2024-11-17 01:42:26.185301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:58864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.698 [2024-11-17 01:42:26.185320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:58880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:58888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:58896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:58904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:58912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:58928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 
01:42:26.185671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:58952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:58968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:58976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.185971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.185997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:58984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:58992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:59000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:59008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186153] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:59016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:59024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:59032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:59040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:59048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:59056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:59064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:59072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:59080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:59088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186596] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:59096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:59104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:59112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:59120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:59128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:59136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:59144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:59152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.186962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.186985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:59160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.187004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.187029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:59168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.187048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.187071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:59176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.699 [2024-11-17 01:42:26.187090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.699 [2024-11-17 01:42:26.187112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:59184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:59192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:59200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:59224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:59232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:59240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:59248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:32.700 [2024-11-17 01:42:26.187506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:59256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:59264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:59272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:59280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:59288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:59296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:59304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:59312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.187922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:59320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.187968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:59328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188049] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:59336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:59344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:59352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:59360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:59368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:59376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:59384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:59392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:59400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:59408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:59416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:59424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:59432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:59440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:59448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:59456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:59464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:59472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.700 [2024-11-17 01:42:26.188790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.188809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b780 is same with the state(6) to be set 00:22:32.700 [2024-11-17 01:42:26.188844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.700 [2024-11-17 01:42:26.188861] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.700 [2024-11-17 01:42:26.188876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59480 len:8 PRP1 0x0 PRP2 0x0 00:22:32.700 [2024-11-17 01:42:26.188897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.700 [2024-11-17 01:42:26.189134] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:32.701 [2024-11-17 01:42:26.189206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.701 [2024-11-17 01:42:26.189241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:26.189262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.701 [2024-11-17 01:42:26.189280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:26.189297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.701 [2024-11-17 01:42:26.189314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:26.189332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.701 [2024-11-17 01:42:26.189349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:26.189373] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:32.701 [2024-11-17 01:42:26.189447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:32.701 [2024-11-17 01:42:26.193151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:32.701 [2024-11-17 01:42:26.220401] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:32.701 6955.50 IOPS, 27.17 MiB/s [2024-11-17T01:42:41.160Z] 7425.67 IOPS, 29.01 MiB/s [2024-11-17T01:42:41.160Z] 7657.50 IOPS, 29.91 MiB/s [2024-11-17T01:42:41.160Z] [2024-11-17 01:42:29.808379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:43976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.808470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:43984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.808554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:43992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.808594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:44000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.808631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:44008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.808668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:44016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.808706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:43464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.808743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:43472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.808780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:43480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.808848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:43488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.808889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:43496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.808927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:43504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.808964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.808983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:43512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:43520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:43528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:43536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:43544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:43552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:43560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:43568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:43576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:43584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.701 [2024-11-17 01:42:29.809362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:44024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.809399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:44032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.809436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:44040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.809473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.809510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.701 [2024-11-17 01:42:29.809538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.701 [2024-11-17 01:42:29.809557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:44064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.809595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:44072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.809631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.809668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:44088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.809704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.809741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:43592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.809795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:43600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.809864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:43608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.809901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:43616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.809938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:43624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.809976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.809995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:43632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.810013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:43640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.810060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:43648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.810099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 
[2024-11-17 01:42:29.810119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:44104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:44112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:44120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:44128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:44136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:44144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:44160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:44176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:44184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:44208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:44216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:44224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:44232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:44240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:44248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:119 nsid:1 lba:44264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.810967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:44280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.702 [2024-11-17 01:42:29.810985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.811003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:43656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.811021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.811048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.811068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.811088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.702 [2024-11-17 01:42:29.811106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.702 [2024-11-17 01:42:29.811125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:43680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.811142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:43688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.811178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:43696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.811215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:43704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.811251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:43712 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.811287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:44288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:44296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:44304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:44312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:44320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:44328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:44336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:44344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:44360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 
[2024-11-17 01:42:29.811695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:44376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:44384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:44392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:44400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:44408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.703 [2024-11-17 01:42:29.811962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.811995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:43720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:43728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:43736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:43744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:43752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:43760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:43768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:43784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:43792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:43800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:43816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:43824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:43832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:43840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:43848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:43856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.703 [2024-11-17 01:42:29.812766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:43864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.703 [2024-11-17 01:42:29.812784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.812803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:43872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.812821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.812840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:43880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.812858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.812891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:43888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.812910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.812929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:43896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.812947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.812968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:43904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.812985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:44416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:44424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:44432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:44440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:44448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:44456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:44464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:44472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:44480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:29.813333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:43912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.813370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 
[2024-11-17 01:42:29.813389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:43920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.813407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:43928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.813444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:43936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.813482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:43944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.813518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:43952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.813555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:43960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.704 [2024-11-17 01:42:29.813592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ba00 is same with the state(6) to be set 00:22:32.704 [2024-11-17 01:42:29.813641] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.704 [2024-11-17 01:42:29.813657] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.704 [2024-11-17 01:42:29.813674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43968 len:8 PRP1 0x0 PRP2 0x0 00:22:32.704 [2024-11-17 01:42:29.813691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.813960] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.3:4421 to 10.0.0.3:4422 00:22:32.704 [2024-11-17 01:42:29.814033] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.704 [2024-11-17 01:42:29.814061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.814081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:22:32.704 [2024-11-17 01:42:29.814099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.814117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.704 [2024-11-17 01:42:29.814136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.814154] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.704 [2024-11-17 01:42:29.814172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:29.814189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:22:32.704 [2024-11-17 01:42:29.814271] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:32.704 [2024-11-17 01:42:29.817899] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:32.704 [2024-11-17 01:42:29.842636] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:32.704 7699.20 IOPS, 30.07 MiB/s [2024-11-17T01:42:41.163Z] 7816.00 IOPS, 30.53 MiB/s [2024-11-17T01:42:41.163Z] 7902.86 IOPS, 30.87 MiB/s [2024-11-17T01:42:41.163Z] 7956.00 IOPS, 31.08 MiB/s [2024-11-17T01:42:41.163Z] 7989.33 IOPS, 31.21 MiB/s [2024-11-17T01:42:41.163Z] [2024-11-17 01:42:34.407555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:94000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.407671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.407715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:94008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.407738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.407761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:94016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.407780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.407801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:94024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.407839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.407889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:94032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.407911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.407932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:94040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.407966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.407986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.408004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.408039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:94056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.408056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.408076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:94064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.704 [2024-11-17 01:42:34.408109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.704 [2024-11-17 01:42:34.408128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:94072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.408145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:94080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.408182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:94088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.408217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:94096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.408254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:94104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.408290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:94112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.408342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:94120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.408380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:93488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:93496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:93512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:93552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:93560 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:93568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.408977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:93576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.408995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:93584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.409034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:93592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.409085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:93600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.409139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:93608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.409176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:94128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:94136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:94144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:94152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 
[2024-11-17 01:42:34.409344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:94176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:94184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:94192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:94200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:94208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:94216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:94224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:94232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:94240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:94248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.705 [2024-11-17 01:42:34.409789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:93616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.705 [2024-11-17 01:42:34.409859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.705 [2024-11-17 01:42:34.409895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:93624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.409916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.409936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.409954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.409974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:93640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.409993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:93648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:93664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:93680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:93688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:93696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:93704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:93712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:93720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:93728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:93736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.410482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:94256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:94264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:94272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:94280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:94288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:94296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:94304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:94312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:94320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:94328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:94336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.410958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:94344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.410976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 
[2024-11-17 01:42:34.410997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:94352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.411015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.411036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:94360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.411055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.411075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:94368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.411093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.411114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:94376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.706 [2024-11-17 01:42:34.411155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.411176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.411195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.411230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.411247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.411266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.411284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.706 [2024-11-17 01:42:34.411302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:93768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.706 [2024-11-17 01:42:34.411319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:93776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:93784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411428] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:93808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:93824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:93832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:93840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:93864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.411880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:46 nsid:1 lba:94384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.707 [2024-11-17 01:42:34.411921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:94392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.707 [2024-11-17 01:42:34.411961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.411983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:94400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.707 [2024-11-17 01:42:34.412017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:94408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.707 [2024-11-17 01:42:34.412055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:94416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.707 [2024-11-17 01:42:34.412093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:94424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.707 [2024-11-17 01:42:34.412132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.707 [2024-11-17 01:42:34.412171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:94440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:32.707 [2024-11-17 01:42:34.412210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93888 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:93920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:93928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:93960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:32.707 [2024-11-17 01:42:34.412729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:93976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:93984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:32.707 [2024-11-17 01:42:34.412844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002c180 is same with the state(6) to be set 00:22:32.707 [2024-11-17 01:42:34.412902] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.707 [2024-11-17 01:42:34.412918] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.707 [2024-11-17 01:42:34.412934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93992 len:8 PRP1 0x0 PRP2 0x0 00:22:32.707 [2024-11-17 01:42:34.412953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.707 [2024-11-17 01:42:34.412974] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.707 [2024-11-17 01:42:34.412988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.707 [2024-11-17 01:42:34.413003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94448 len:8 PRP1 0x0 PRP2 0x0 00:22:32.707 [2024-11-17 01:42:34.413020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413038] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.708 [2024-11-17 01:42:34.413052] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.708 [2024-11-17 01:42:34.413067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94456 len:8 PRP1 0x0 PRP2 0x0 00:22:32.708 [2024-11-17 01:42:34.413084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413102] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.708 [2024-11-17 01:42:34.413115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.708 [2024-11-17 01:42:34.413132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94464 len:8 PRP1 0x0 PRP2 0x0 00:22:32.708 [2024-11-17 01:42:34.413151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413169] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.708 [2024-11-17 01:42:34.413196] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.708 [2024-11-17 01:42:34.413211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94472 len:8 PRP1 0x0 PRP2 0x0 00:22:32.708 [2024-11-17 01:42:34.413228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413245] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.708 [2024-11-17 01:42:34.413272] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.708 [2024-11-17 01:42:34.413286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94480 len:8 PRP1 0x0 PRP2 0x0 00:22:32.708 [2024-11-17 01:42:34.413303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413319] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.708 [2024-11-17 01:42:34.413331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.708 [2024-11-17 01:42:34.413356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94488 len:8 PRP1 0x0 PRP2 0x0 00:22:32.708 [2024-11-17 01:42:34.413374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413391] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.708 [2024-11-17 01:42:34.413404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.708 [2024-11-17 01:42:34.413417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94496 len:8 PRP1 0x0 PRP2 0x0 00:22:32.708 [2024-11-17 01:42:34.413434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413451] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:32.708 [2024-11-17 01:42:34.413464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:32.708 [2024-11-17 01:42:34.413477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:94504 len:8 PRP1 0x0 PRP2 0x0 00:22:32.708 [2024-11-17 01:42:34.413493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413727] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.3:4422 to 10.0.0.3:4420 00:22:32.708 [2024-11-17 01:42:34.413798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.708 [2024-11-17 01:42:34.413855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.708 [2024-11-17 01:42:34.413897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413915] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.708 [2024-11-17 01:42:34.413934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.708 [2024-11-17 01:42:34.413970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.708 [2024-11-17 01:42:34.413987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:22:32.708 [2024-11-17 01:42:34.417767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:22:32.708 [2024-11-17 01:42:34.417847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:32.708 [2024-11-17 01:42:34.441773] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:22:32.708 7980.80 IOPS, 31.18 MiB/s [2024-11-17T01:42:41.167Z] 8015.36 IOPS, 31.31 MiB/s [2024-11-17T01:42:41.167Z] 8044.75 IOPS, 31.42 MiB/s [2024-11-17T01:42:41.167Z] 8065.92 IOPS, 31.51 MiB/s [2024-11-17T01:42:41.167Z] 8085.71 IOPS, 31.58 MiB/s [2024-11-17T01:42:41.167Z] 8101.33 IOPS, 31.65 MiB/s 00:22:32.708 Latency(us) 00:22:32.708 [2024-11-17T01:42:41.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.708 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:32.708 Verification LBA range: start 0x0 length 0x4000 00:22:32.708 NVMe0n1 : 15.01 8101.47 31.65 213.53 0.00 15362.44 636.74 17635.14 00:22:32.708 [2024-11-17T01:42:41.167Z] =================================================================================================================== 00:22:32.708 [2024-11-17T01:42:41.167Z] Total : 8101.47 31.65 213.53 0.00 15362.44 636.74 17635.14 00:22:32.708 Received shutdown signal, test time was about 15.000000 seconds 00:22:32.708 00:22:32.708 Latency(us) 00:22:32.708 [2024-11-17T01:42:41.167Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.708 [2024-11-17T01:42:41.167Z] =================================================================================================================== 00:22:32.708 [2024-11-17T01:42:41.167Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:32.708 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=81433 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 81433 /var/tmp/bdevperf.sock 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 81433 ']' 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:32.708 01:42:40 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:33.645 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:33.645 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:33.645 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:22:33.905 [2024-11-17 01:42:42.241669] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:33.905 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4422 00:22:34.164 [2024-11-17 01:42:42.485861] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4422 *** 00:22:34.165 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:34.423 NVMe0n1 00:22:34.423 01:42:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:34.682 00:22:34.682 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:34.941 00:22:34.941 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:34.941 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:22:35.201 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:35.768 01:42:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:22:39.052 01:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:22:39.052 01:42:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:39.052 01:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:39.052 01:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=81510 00:22:39.052 01:42:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 81510 00:22:39.991 { 00:22:39.991 "results": [ 00:22:39.991 { 00:22:39.991 "job": "NVMe0n1", 00:22:39.991 "core_mask": "0x1", 00:22:39.991 "workload": "verify", 00:22:39.991 "status": "finished", 00:22:39.991 "verify_range": { 00:22:39.991 "start": 0, 00:22:39.991 "length": 16384 00:22:39.991 }, 00:22:39.991 "queue_depth": 128, 00:22:39.991 "io_size": 4096, 00:22:39.991 "runtime": 1.016322, 00:22:39.991 "iops": 6443.823906202955, 00:22:39.991 "mibps": 25.171187133605294, 00:22:39.991 "io_failed": 0, 00:22:39.991 "io_timeout": 0, 00:22:39.991 "avg_latency_us": 19787.345641111064, 00:22:39.991 "min_latency_us": 2964.0145454545454, 00:22:39.991 "max_latency_us": 16681.890909090907 00:22:39.991 } 00:22:39.991 ], 00:22:39.991 "core_count": 1 00:22:39.991 } 00:22:39.991 01:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:39.991 [2024-11-17 01:42:41.062066] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:39.991 [2024-11-17 01:42:41.062233] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81433 ] 00:22:39.991 [2024-11-17 01:42:41.242908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.991 [2024-11-17 01:42:41.332328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.991 [2024-11-17 01:42:41.487252] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:39.991 [2024-11-17 01:42:43.894137] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.3:4420 to 10.0.0.3:4421 00:22:39.991 [2024-11-17 01:42:43.894294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.991 [2024-11-17 01:42:43.894327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.991 [2024-11-17 01:42:43.894357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.991 [2024-11-17 01:42:43.894376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.991 [2024-11-17 01:42:43.894396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.991 [2024-11-17 01:42:43.894413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.991 [2024-11-17 01:42:43.894434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.991 [2024-11-17 01:42:43.894452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.991 [2024-11-17 01:42:43.894478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:22:39.991 [2024-11-17 01:42:43.894551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:22:39.991 [2024-11-17 01:42:43.894602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:22:39.991 [2024-11-17 01:42:43.907173] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:22:39.991 Running I/O for 1 seconds... 
00:22:39.991 6421.00 IOPS, 25.08 MiB/s 00:22:39.992 Latency(us) 00:22:39.992 [2024-11-17T01:42:48.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:39.992 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:39.992 Verification LBA range: start 0x0 length 0x4000 00:22:39.992 NVMe0n1 : 1.02 6443.82 25.17 0.00 0.00 19787.35 2964.01 16681.89 00:22:39.992 [2024-11-17T01:42:48.451Z] =================================================================================================================== 00:22:39.992 [2024-11-17T01:42:48.451Z] Total : 6443.82 25.17 0.00 0.00 19787.35 2964.01 16681.89 00:22:39.992 01:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:39.992 01:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:22:40.250 01:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:40.509 01:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:40.509 01:42:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:22:40.768 01:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:41.026 01:42:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 81433 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81433 ']' 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81433 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81433 00:22:44.314 killing process with pid 81433 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81433' 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81433 00:22:44.314 01:42:52 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81433 00:22:45.252 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:22:45.252 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:45.511 rmmod nvme_tcp 00:22:45.511 rmmod nvme_fabrics 00:22:45.511 rmmod nvme_keyring 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 81165 ']' 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 81165 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 81165 ']' 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 81165 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81165 00:22:45.511 killing process with pid 81165 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81165' 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 81165 00:22:45.511 01:42:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 81165 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:46.449 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:46.706 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:46.706 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:46.706 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:46.706 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:46.706 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:46.706 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:46.706 01:42:54 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:46.706 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:46.706 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:46.706 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:46.706 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:46.706 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.706 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@300 -- # return 0 00:22:46.707 ************************************ 00:22:46.707 END TEST nvmf_failover 00:22:46.707 ************************************ 00:22:46.707 00:22:46.707 real 0m35.308s 00:22:46.707 user 2m14.882s 00:22:46.707 sys 0m5.704s 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:46.707 ************************************ 00:22:46.707 START TEST nvmf_host_discovery 00:22:46.707 ************************************ 00:22:46.707 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:46.966 * Looking for test storage... 
00:22:46.966 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.966 --rc genhtml_branch_coverage=1 00:22:46.966 --rc genhtml_function_coverage=1 00:22:46.966 --rc genhtml_legend=1 00:22:46.966 --rc geninfo_all_blocks=1 00:22:46.966 --rc geninfo_unexecuted_blocks=1 00:22:46.966 00:22:46.966 ' 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.966 --rc genhtml_branch_coverage=1 00:22:46.966 --rc genhtml_function_coverage=1 00:22:46.966 --rc genhtml_legend=1 00:22:46.966 --rc geninfo_all_blocks=1 00:22:46.966 --rc geninfo_unexecuted_blocks=1 00:22:46.966 00:22:46.966 ' 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.966 --rc genhtml_branch_coverage=1 00:22:46.966 --rc genhtml_function_coverage=1 00:22:46.966 --rc genhtml_legend=1 00:22:46.966 --rc geninfo_all_blocks=1 00:22:46.966 --rc geninfo_unexecuted_blocks=1 00:22:46.966 00:22:46.966 ' 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:46.966 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:46.966 --rc genhtml_branch_coverage=1 00:22:46.966 --rc genhtml_function_coverage=1 00:22:46.966 --rc genhtml_legend=1 00:22:46.966 --rc geninfo_all_blocks=1 00:22:46.966 --rc geninfo_unexecuted_blocks=1 00:22:46.966 00:22:46.966 ' 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:46.966 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:46.967 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- 
# DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
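Before the interface checks that follow, a condensed sketch of the topology nvmf_veth_init builds may help; it is assembled only from commands that appear verbatim in the trace below (the second initiator/target pair, the link-up steps, and the iptables comment options are elided) and is not an excerpt of common.sh itself:

    ip netns add nvmf_tgt_ns_spdk                                # target runs in its own namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                     # NVMF_FIRST_INITIATOR_IP
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # NVMF_FIRST_TARGET_IP
    ip link add nvmf_br type bridge                              # bridge joining both sides
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the initiator end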
00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:46.967 Cannot find device "nvmf_init_br" 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # true 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:46.967 Cannot find device "nvmf_init_br2" 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # true 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:46.967 Cannot find device "nvmf_tgt_br" 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@164 -- # true 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:46.967 Cannot find device "nvmf_tgt_br2" 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@165 -- # true 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:46.967 Cannot find device "nvmf_init_br" 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@166 -- # true 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:46.967 Cannot find device "nvmf_init_br2" 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@167 -- # true 00:22:46.967 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:47.227 Cannot find device "nvmf_tgt_br" 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@168 -- # true 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:47.227 Cannot find device "nvmf_tgt_br2" 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # true 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:47.227 Cannot find device "nvmf_br" 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@170 -- # true 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:22:47.227 Cannot find device "nvmf_init_if" 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # true 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:47.227 Cannot find device "nvmf_init_if2" 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # true 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:47.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such 
file or directory 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@173 -- # true 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:47.227 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@174 -- # true 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:47.227 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:47.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:47.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:22:47.486 00:22:47.486 --- 10.0.0.3 ping statistics --- 00:22:47.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.486 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:47.486 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:47.486 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:22:47.486 00:22:47.486 --- 10.0.0.4 ping statistics --- 00:22:47.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.486 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:47.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:47.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:47.486 00:22:47.486 --- 10.0.0.1 ping statistics --- 00:22:47.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.486 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:47.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:47.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:22:47.486 00:22:47.486 --- 10.0.0.2 ping statistics --- 00:22:47.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:47.486 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:47.486 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@461 -- # return 0 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=81843 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 81843 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 81843 ']' 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:47.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:47.487 01:42:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:47.487 [2024-11-17 01:42:55.922184] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:47.487 [2024-11-17 01:42:55.922353] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:47.746 [2024-11-17 01:42:56.103725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:47.746 [2024-11-17 01:42:56.187345] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:47.746 [2024-11-17 01:42:56.187410] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:47.746 [2024-11-17 01:42:56.187443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:47.746 [2024-11-17 01:42:56.187480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:47.746 [2024-11-17 01:42:56.187492] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:47.746 [2024-11-17 01:42:56.188689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.005 [2024-11-17 01:42:56.334611] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.573 [2024-11-17 01:42:56.904415] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.573 [2024-11-17 01:42:56.912604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.573 01:42:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.573 null0 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.573 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 null1 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=81871 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 81871 /tmp/host.sock 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 81871 ']' 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.574 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.574 01:42:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:48.833 [2024-11-17 01:42:57.076135] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:22:48.833 [2024-11-17 01:42:57.076305] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81871 ] 00:22:48.833 [2024-11-17 01:42:57.254924] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:49.092 [2024-11-17 01:42:57.377976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:49.351 [2024-11-17 01:42:57.566549] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:49.610 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:49.870 01:42:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 01:42:58 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:49.870 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.130 [2024-11-17 01:42:58.389086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ 
'' == '' ]] 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.130 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:50.131 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:50.406 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:22:50.406 01:42:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:22:50.677 [2024-11-17 01:42:59.038499] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:50.677 [2024-11-17 01:42:59.038560] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:50.677 
[2024-11-17 01:42:59.038595] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:50.677 [2024-11-17 01:42:59.044569] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:22:50.677 [2024-11-17 01:42:59.099164] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:22:50.677 [2024-11-17 01:42:59.100595] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b280:1 started. 00:22:50.677 [2024-11-17 01:42:59.102628] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:50.678 [2024-11-17 01:42:59.102678] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:50.678 [2024-11-17 01:42:59.107245] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b280 was disconnected and freed. delete nvme_qpair. 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:51.254 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:51.514 [2024-11-17 01:42:59.832498] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:51.514 [2024-11-17 01:42:59.838263] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
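[Annotation, not part of the captured output] Up to this point the trace shows the target being provisioned (subsystem nqn.2016-06.io.spdk:cnode0, namespaces backed by the null0/null1 bdevs, a TCP listener on 10.0.0.3:4420, and host nqn.2021-12.io.spdk:test allowed) while the host-side discovery service started earlier against 10.0.0.3:8009 attaches controller nvme0 and surfaces the namespaces as bdevs. A minimal sketch of the same sequence outside the test harness follows; the scripts/rpc.py path is an assumption, the null0/null1 bdevs are assumed to already exist, and all addresses, socket paths, and NQNs are simply copied from this run.

# Target side (default RPC socket): subsystem, host allow-list, namespaces, data listener.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1

# Host side (-s /tmp/host.sock): the discovery service started with
#   bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# learns about cnode0 from the discovery log page / AER and creates the controller and bdevs;
# these calls only observe the result, as the trace above and below does.
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'    # expect: nvme0
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'               # expect: nvme0n1 nvme0n2
scripts/rpc.py -s /tmp/host.sock notify_get_notifications -i 0 | jq '. | length' # bdev-add notifications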
00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4421 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.514 [2024-11-17 01:42:59.947704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:22:51.514 [2024-11-17 01:42:59.948883] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:51.514 [2024-11-17 01:42:59.948946] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:51.514 [2024-11-17 01:42:59.954945] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new path for nvme0 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:51.514 01:42:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:51.773 [2024-11-17 01:43:00.013571] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4421 00:22:51.773 [2024-11-17 01:43:00.013639] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:51.773 [2024-11-17 01:43:00.013658] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:22:51.773 [2024-11-17 01:43:00.013669] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:51.773 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.774 [2024-11-17 01:43:00.177350] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:22:51.774 [2024-11-17 01:43:00.177410] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:51.774 [2024-11-17 01:43:00.181065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.774 [2024-11-17 01:43:00.181111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.774 [2024-11-17 01:43:00.181130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.774 [2024-11-17 01:43:00.181159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.774 [2024-11-17 01:43:00.181171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.774 [2024-11-17 01:43:00.181198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.774 [2024-11-17 01:43:00.181211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:51.774 [2024-11-17 01:43:00.181223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:51.774 [2024-11-17 01:43:00.181234] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ad80 is same with the state(6) to be set 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 
'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:51.774 [2024-11-17 01:43:00.183418] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 not found 00:22:51.774 [2024-11-17 01:43:00.183459] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:51.774 [2024-11-17 01:43:00.183551] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ad80 (9): Bad file descriptor 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:51.774 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:52.033 01:43:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:22:52.033 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:52.034 01:43:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.034 
01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:52.034 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:52.293 01:43:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.229 [2024-11-17 01:43:01.599417] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:22:53.229 [2024-11-17 01:43:01.599450] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:22:53.229 [2024-11-17 01:43:01.599501] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:22:53.229 [2024-11-17 01:43:01.605471] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 new subsystem nvme0 00:22:53.229 [2024-11-17 01:43:01.664073] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.3:4421 00:22:53.229 [2024-11-17 01:43:01.665204] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x61500002c680:1 started. 00:22:53.229 [2024-11-17 01:43:01.667505] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:22:53.229 [2024-11-17 01:43:01.667574] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4421 found again 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:53.229 [2024-11-17 01:43:01.669667] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x61500002c680 was disconnected and freed. delete nvme_qpair. 
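[Annotation, not part of the captured output] The remainder of the trace exercises bdev_nvme_start_discovery error handling: re-issuing the call for the already-running discovery service (name nvme, 10.0.0.3:8009), and issuing it under a new name against the same discovery endpoint, both return JSON-RPC error -17 "File exists", while a new name pointed at port 8010, where nothing is listening, retries until the 3000 ms attach timeout expires and returns -110 "Connection timed out". A sketch of the same negative calls, with every name, address, and flag copied from this run:

scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w             # -17 File exists
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w      # -17 File exists
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 # -110 Connection timed out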
00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.229 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.229 request: 00:22:53.229 { 00:22:53.229 "name": "nvme", 00:22:53.229 "trtype": "tcp", 00:22:53.229 "traddr": "10.0.0.3", 00:22:53.229 "adrfam": "ipv4", 00:22:53.229 "trsvcid": "8009", 00:22:53.488 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:53.489 "wait_for_attach": true, 00:22:53.489 "method": "bdev_nvme_start_discovery", 00:22:53.489 "req_id": 1 00:22:53.489 } 00:22:53.489 Got JSON-RPC error response 00:22:53.489 response: 00:22:53.489 { 00:22:53.489 "code": -17, 00:22:53.489 "message": "File exists" 00:22:53.489 } 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.489 request: 00:22:53.489 { 00:22:53.489 "name": "nvme_second", 00:22:53.489 "trtype": "tcp", 00:22:53.489 "traddr": "10.0.0.3", 00:22:53.489 "adrfam": "ipv4", 00:22:53.489 "trsvcid": "8009", 00:22:53.489 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:53.489 "wait_for_attach": true, 00:22:53.489 "method": "bdev_nvme_start_discovery", 00:22:53.489 "req_id": 1 00:22:53.489 } 00:22:53.489 Got JSON-RPC error response 00:22:53.489 response: 00:22:53.489 { 00:22:53.489 "code": -17, 00:22:53.489 "message": "File exists" 00:22:53.489 } 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- 
# rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.3 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:53.489 01:43:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:54.864 [2024-11-17 01:43:02.928089] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:54.865 [2024-11-17 01:43:02.928192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: 
*ERROR*: sock connection error of tqpair=0x61500002c900 with addr=10.0.0.3, port=8010 00:22:54.865 [2024-11-17 01:43:02.928244] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:54.865 [2024-11-17 01:43:02.928259] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:54.865 [2024-11-17 01:43:02.928272] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:55.801 [2024-11-17 01:43:03.928102] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:22:55.801 [2024-11-17 01:43:03.928185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002cb80 with addr=10.0.0.3, port=8010 00:22:55.801 [2024-11-17 01:43:03.928235] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:55.801 [2024-11-17 01:43:03.928248] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:55.801 [2024-11-17 01:43:03.928260] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] could not start discovery connect 00:22:56.738 [2024-11-17 01:43:04.927904] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.3:8010] timed out while attaching discovery ctrlr 00:22:56.738 request: 00:22:56.738 { 00:22:56.738 "name": "nvme_second", 00:22:56.738 "trtype": "tcp", 00:22:56.738 "traddr": "10.0.0.3", 00:22:56.738 "adrfam": "ipv4", 00:22:56.738 "trsvcid": "8010", 00:22:56.738 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:56.738 "wait_for_attach": false, 00:22:56.738 "attach_timeout_ms": 3000, 00:22:56.738 "method": "bdev_nvme_start_discovery", 00:22:56.738 "req_id": 1 00:22:56.738 } 00:22:56.738 Got JSON-RPC error response 00:22:56.738 response: 00:22:56.738 { 00:22:56.738 "code": -110, 00:22:56.738 "message": "Connection timed out" 00:22:56.738 } 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:56.738 
01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 81871 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:56.738 01:43:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:22:56.738 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:56.738 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:22:56.738 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:56.738 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:56.738 rmmod nvme_tcp 00:22:56.738 rmmod nvme_fabrics 00:22:56.738 rmmod nvme_keyring 00:22:56.738 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:56.738 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 81843 ']' 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 81843 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 81843 ']' 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 81843 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81843 00:22:56.739 killing process with pid 81843 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81843' 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 81843 00:22:56.739 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 81843 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:57.676 01:43:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:22:57.676 01:43:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:22:57.676 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:57.935 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:57.935 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@246 -- # remove_spdk_ns 00:22:57.935 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:57.935 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:57.935 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:57.935 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@300 -- # return 0 00:22:57.935 00:22:57.935 real 0m11.068s 00:22:57.935 user 0m20.845s 00:22:57.935 sys 0m1.987s 00:22:57.935 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:22:57.936 ************************************ 00:22:57.936 END TEST nvmf_host_discovery 00:22:57.936 ************************************ 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:57.936 ************************************ 00:22:57.936 START TEST nvmf_host_multipath_status 00:22:57.936 ************************************ 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:22:57.936 * Looking for test 
storage... 00:22:57.936 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:22:57.936 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:58.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.196 --rc genhtml_branch_coverage=1 00:22:58.196 --rc genhtml_function_coverage=1 00:22:58.196 --rc genhtml_legend=1 00:22:58.196 --rc geninfo_all_blocks=1 00:22:58.196 --rc geninfo_unexecuted_blocks=1 00:22:58.196 00:22:58.196 ' 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:58.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.196 --rc genhtml_branch_coverage=1 00:22:58.196 --rc genhtml_function_coverage=1 00:22:58.196 --rc genhtml_legend=1 00:22:58.196 --rc geninfo_all_blocks=1 00:22:58.196 --rc geninfo_unexecuted_blocks=1 00:22:58.196 00:22:58.196 ' 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:58.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.196 --rc genhtml_branch_coverage=1 00:22:58.196 --rc genhtml_function_coverage=1 00:22:58.196 --rc genhtml_legend=1 00:22:58.196 --rc geninfo_all_blocks=1 00:22:58.196 --rc geninfo_unexecuted_blocks=1 00:22:58.196 00:22:58.196 ' 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:58.196 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:58.196 --rc genhtml_branch_coverage=1 00:22:58.196 --rc genhtml_function_coverage=1 00:22:58.196 --rc genhtml_legend=1 00:22:58.196 --rc geninfo_all_blocks=1 00:22:58.196 --rc geninfo_unexecuted_blocks=1 00:22:58.196 00:22:58.196 ' 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:58.196 01:43:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.196 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:58.197 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@460 -- # nvmf_veth_init 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@153 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:22:58.197 Cannot find device "nvmf_init_br" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:22:58.197 Cannot find device "nvmf_init_br2" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:22:58.197 Cannot find device "nvmf_tgt_br" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@164 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:22:58.197 Cannot find device "nvmf_tgt_br2" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@165 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:22:58.197 Cannot find device "nvmf_init_br" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@166 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:22:58.197 Cannot find device "nvmf_init_br2" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@167 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:22:58.197 Cannot find device "nvmf_tgt_br" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@168 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:22:58.197 Cannot find device "nvmf_tgt_br2" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:22:58.197 Cannot find device "nvmf_br" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@170 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # ip link delete 
nvmf_init_if 00:22:58.197 Cannot find device "nvmf_init_if" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:22:58.197 Cannot find device "nvmf_init_if2" 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:58.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@173 -- # true 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:58.197 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:58.197 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@174 -- # true 00:22:58.198 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:22:58.198 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:58.198 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:22:58.198 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:22:58.457 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:58.457 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:22:58.457 00:22:58.457 --- 10.0.0.3 ping statistics --- 00:22:58.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.457 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:22:58.457 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:22:58.457 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.058 ms 00:22:58.457 00:22:58.457 --- 10.0.0.4 ping statistics --- 00:22:58.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.457 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:58.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:58.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:22:58.457 00:22:58.457 --- 10.0.0.1 ping statistics --- 00:22:58.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.457 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:22:58.457 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:22:58.458 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:58.458 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:22:58.458 00:22:58.458 --- 10.0.0.2 ping statistics --- 00:22:58.458 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:58.458 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@461 -- # return 0 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=82388 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 82388 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 82388 ']' 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:58.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:58.458 01:43:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:58.717 [2024-11-17 01:43:07.026295] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:22:58.717 [2024-11-17 01:43:07.026462] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:58.976 [2024-11-17 01:43:07.208085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:58.976 [2024-11-17 01:43:07.290916] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:58.976 [2024-11-17 01:43:07.290972] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:58.976 [2024-11-17 01:43:07.291005] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:58.976 [2024-11-17 01:43:07.291025] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:58.976 [2024-11-17 01:43:07.291038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:58.976 [2024-11-17 01:43:07.292668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:58.976 [2024-11-17 01:43:07.292688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.234 [2024-11-17 01:43:07.446206] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:22:59.802 01:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:59.802 01:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:22:59.802 01:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:59.802 01:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:59.802 01:43:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:59.802 01:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:59.802 01:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=82388 00:22:59.802 01:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:59.802 [2024-11-17 01:43:08.244007] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.061 01:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:00.320 Malloc0 00:23:00.320 01:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:00.578 01:43:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:00.837 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:01.095 [2024-11-17 01:43:09.341377] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:01.095 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:23:01.354 [2024-11-17 01:43:09.565527] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:23:01.354 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=82448 00:23:01.354 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.354 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 82448 /var/tmp/bdevperf.sock 00:23:01.354 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:01.354 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 82448 ']' 00:23:01.354 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.354 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:01.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.355 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:01.355 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:01.355 01:43:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:02.291 01:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:02.291 01:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:02.291 01:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:02.550 01:43:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:02.809 Nvme0n1 00:23:02.809 01:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:03.068 Nvme0n1 00:23:03.068 01:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:03.068 01:43:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:05.603 01:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:05.603 01:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:05.603 01:43:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:05.862 01:43:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:06.799 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:06.799 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:06.799 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:06.799 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:07.058 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.058 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:07.058 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:07.058 01:43:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.317 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:07.317 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:07.317 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:07.317 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.576 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.576 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:07.576 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:07.576 01:43:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:07.836 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:07.836 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:07.836 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:07.836 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.095 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.095 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:08.095 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:08.095 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:08.355 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:08.355 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:08.355 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:08.615 01:43:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 
00:23:08.874 01:43:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:09.819 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:09.819 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:09.819 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:09.819 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.078 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:10.078 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:10.078 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.078 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:10.336 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.336 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:10.336 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.336 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:10.595 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.595 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:10.595 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.595 01:43:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:10.859 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:10.859 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:10.859 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:10.859 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:11.121 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.121 01:43:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:11.121 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:11.121 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:11.379 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:11.379 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:11.379 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:11.638 01:43:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:11.896 01:43:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:12.832 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:12.832 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:12.832 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:12.832 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:13.091 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.091 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:13.091 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:13.091 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.350 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:13.350 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:13.350 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.350 01:43:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:13.918 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:13.918 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:13.918 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:13.918 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:14.177 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.177 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:14.177 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:14.177 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.177 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.177 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:14.178 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:14.178 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:14.747 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:14.747 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:14.747 01:43:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:14.747 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:15.006 01:43:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:16.417 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:16.417 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:16.417 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.417 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:16.417 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.417 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 
4421 current false 00:23:16.417 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.417 01:43:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:16.681 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:16.681 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:16.681 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.681 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:16.940 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:16.940 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:16.940 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:16.940 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:17.199 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.199 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:17.199 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:17.199 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.458 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:17.458 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:17.458 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:17.459 01:43:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:17.718 01:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:17.718 01:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:17.718 01:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:17.977 01:43:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:18.237 01:43:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:19.175 01:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:19.175 01:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:19.175 01:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.175 01:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.435 01:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.435 01:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:19.435 01:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.435 01:43:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:19.694 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:19.694 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:19.694 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.694 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:19.955 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.955 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:19.955 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.955 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:20.214 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.214 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:20.214 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.214 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.473 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.473 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:20.473 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.473 01:43:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.733 01:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.733 01:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:20.733 01:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:20.991 01:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:21.250 01:43:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:22.629 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:22.629 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:22.629 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.629 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.629 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.629 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:22.629 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.629 01:43:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.889 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.889 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.889 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.889 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.149 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.149 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:23.149 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.149 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:23.408 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.408 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:23.408 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.408 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:23.667 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:23.667 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:23.667 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.667 01:43:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.927 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.927 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:24.186 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:24.186 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:24.446 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:24.705 01:43:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:25.643 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:25.643 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:25.643 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").current' 00:23:25.643 01:43:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.902 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.902 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:25.902 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.902 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:26.162 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.162 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:26.162 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.162 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:26.421 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.421 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:26.421 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.421 01:43:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:26.680 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.680 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:26.680 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.680 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:26.939 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.939 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:26.939 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:26.939 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.198 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.198 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:27.198 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:27.458 01:43:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:23:27.717 01:43:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:28.654 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:28.654 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:28.654 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.654 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:28.913 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:28.913 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:28.913 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.913 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:29.172 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.172 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:29.172 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.172 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:29.431 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.431 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:29.431 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.431 01:43:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:29.691 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 
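Each set_ANA_state step in the trace expands to two nvmf_subsystem_listener_set_ana_state calls, one per listener, before the one-second settle and the following check_status. A sketch of that wrapper using the same NQN, address and ports as the logged commands (an illustrative reconstruction, not the verbatim script):

    # set the ANA state of the 4420 and 4421 listeners, mirroring set_ANA_state in the log
    set_ANA_state() {
        local state_4420=$1 state_4421=$2
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n "$state_4420"
        /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n "$state_4421"
    }

    # e.g. the transition logged at multipath_status.sh@123
    set_ANA_state non_optimized optimized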
00:23:29.691 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:29.691 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.691 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:29.950 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:29.950 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:29.950 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:29.950 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:30.209 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.209 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:30.209 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:30.468 01:43:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n non_optimized 00:23:30.726 01:43:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:31.660 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:31.661 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:31.661 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.661 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:31.919 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.919 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:31.919 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.919 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:32.488 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.488 01:43:40 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:32.488 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.488 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:32.747 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.747 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:32.747 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:32.747 01:43:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.747 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:32.747 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:32.747 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:32.747 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:33.006 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.006 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:33.006 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.006 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:33.266 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.266 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:33.266 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:33.525 01:43:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:23:33.784 01:43:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:35.162 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:35.162 01:43:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.162 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.162 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:35.162 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.162 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:35.162 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.162 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:35.422 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:35.422 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:35.422 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.422 01:43:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:35.681 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.681 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:35.681 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.681 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:35.940 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:35.940 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:35.940 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:35.940 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:36.199 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.199 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:36.199 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:36.199 01:43:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 82448 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 82448 ']' 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 82448 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82448 00:23:36.459 killing process with pid 82448 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82448' 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 82448 00:23:36.459 01:43:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 82448 00:23:36.459 { 00:23:36.459 "results": [ 00:23:36.459 { 00:23:36.459 "job": "Nvme0n1", 00:23:36.459 "core_mask": "0x4", 00:23:36.459 "workload": "verify", 00:23:36.459 "status": "terminated", 00:23:36.459 "verify_range": { 00:23:36.459 "start": 0, 00:23:36.459 "length": 16384 00:23:36.459 }, 00:23:36.459 "queue_depth": 128, 00:23:36.459 "io_size": 4096, 00:23:36.459 "runtime": 33.134059, 00:23:36.459 "iops": 7886.5978961406445, 00:23:36.459 "mibps": 30.807023031799392, 00:23:36.459 "io_failed": 0, 00:23:36.459 "io_timeout": 0, 00:23:36.459 "avg_latency_us": 16198.625008883391, 00:23:36.459 "min_latency_us": 212.24727272727273, 00:23:36.459 "max_latency_us": 4026531.84 00:23:36.459 } 00:23:36.459 ], 00:23:36.459 "core_count": 1 00:23:36.459 } 00:23:37.403 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 82448 00:23:37.403 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:37.403 [2024-11-17 01:43:09.695489] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
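The bdevperf summary emitted when pid 82448 is killed is internally consistent: 7886.6 IOPS of 4096-byte I/O (runtime 33.13 s) corresponds to the reported ~30.81 MiB/s. A quick check of that arithmetic (illustrative only):

    # 7886.5978961406445 IOPS * 4096 B per I/O, expressed in MiB/s
    awk 'BEGIN { printf "%.3f MiB/s\n", 7886.5978961406445 * 4096 / 1048576 }'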
00:23:37.403 [2024-11-17 01:43:09.695694] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82448 ] 00:23:37.403 [2024-11-17 01:43:09.882312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.403 [2024-11-17 01:43:10.004750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:37.403 [2024-11-17 01:43:10.178645] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:37.403 Running I/O for 90 seconds... 00:23:37.403 7772.00 IOPS, 30.36 MiB/s [2024-11-17T01:43:45.862Z] 8258.00 IOPS, 32.26 MiB/s [2024-11-17T01:43:45.862Z] 8393.33 IOPS, 32.79 MiB/s [2024-11-17T01:43:45.862Z] 8475.00 IOPS, 33.11 MiB/s [2024-11-17T01:43:45.862Z] 8478.40 IOPS, 33.12 MiB/s [2024-11-17T01:43:45.862Z] 8297.50 IOPS, 32.41 MiB/s [2024-11-17T01:43:45.862Z] 8097.71 IOPS, 31.63 MiB/s [2024-11-17T01:43:45.862Z] 7919.12 IOPS, 30.93 MiB/s [2024-11-17T01:43:45.862Z] 7877.22 IOPS, 30.77 MiB/s [2024-11-17T01:43:45.862Z] 7966.60 IOPS, 31.12 MiB/s [2024-11-17T01:43:45.862Z] 8022.00 IOPS, 31.34 MiB/s [2024-11-17T01:43:45.862Z] 8065.75 IOPS, 31.51 MiB/s [2024-11-17T01:43:45.862Z] 8118.31 IOPS, 31.71 MiB/s [2024-11-17T01:43:45.862Z] 8145.07 IOPS, 31.82 MiB/s [2024-11-17T01:43:45.862Z] [2024-11-17 01:43:26.264306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:39504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.403 [2024-11-17 01:43:26.264423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:37.403 [2024-11-17 01:43:26.264501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:39512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.403 [2024-11-17 01:43:26.264528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:39520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:39528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:39536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:39544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) 
qid:1 cid:64 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:39552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:39560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:39568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:39576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.264961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:39584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.264981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:39592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:39600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:39608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:39616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265209] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:39632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:39648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:39656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:39664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:39248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.404 [2024-11-17 01:43:26.265497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.404 [2024-11-17 01:43:26.265544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.404 [2024-11-17 01:43:26.265591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.404 [2024-11-17 01:43:26.265637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.404 [2024-11-17 
01:43:26.265682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.404 [2024-11-17 01:43:26.265728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.404 [2024-11-17 01:43:26.265773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.404 [2024-11-17 01:43:26.265848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:39672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:39680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.265943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.265993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:39688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:39696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:39704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:39712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:39720 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:39728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:39736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:39744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:39752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.404 [2024-11-17 01:43:26.266525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:37.404 [2024-11-17 01:43:26.266551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:39760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.266571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.266597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:39768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.266617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.266643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:39776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.266663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.266689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:39784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.266708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.266745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:39792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.266768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.266826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.266857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.266888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.266909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.266937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.266966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.266994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.267014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.267062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.267110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.267170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.267216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:39800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:39808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267334] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:39816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:39824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:39832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:39840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:39848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:39856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:39872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:39880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:39888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004d p:0 m:0 
dnr:0 00:23:37.405 [2024-11-17 01:43:26.267877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:39896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:39904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.267948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.267990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:39912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.268030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:39920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.268082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:39928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.268129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:39936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.268188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:39944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.268235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:39952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.405 [2024-11-17 01:43:26.268282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.268329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.268376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.268422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.268469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.268522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.405 [2024-11-17 01:43:26.268571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:37.405 [2024-11-17 01:43:26.268598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.268618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.268645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.268665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.268692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:39960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.268713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.268741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:39968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.268769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.268799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:39976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.268833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.268863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:39984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.268883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.268910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:39992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.268930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.268956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.268976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:37.406 [2024-11-17 01:43:26.269390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.269593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.269640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.269687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.269733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.269780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.269842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.269889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.269917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.269937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.270700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.406 [2024-11-17 01:43:26.270734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.270804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:40104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.270829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.270865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:40112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.270887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.270921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:40120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.270945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.270979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:40128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.270999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.271032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:40136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.271052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.271086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:40144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.271106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.271140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:40152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.271160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.271210] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:40160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.271235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.271270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:40168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.271291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.271324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:40176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.271344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.406 [2024-11-17 01:43:26.271377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:40184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.406 [2024-11-17 01:43:26.271397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:26.271430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:40192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:26.271450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:26.271494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:40200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:26.271515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:26.271549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:40208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:26.271570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:26.271603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:40216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:26.271650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:26.271691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:40224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:26.271714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:26.271749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:40232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:26.271770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 
00:23:37.407 [2024-11-17 01:43:26.271804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:40240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:26.271841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:26.271877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:40248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:26.271901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:37.407 7972.93 IOPS, 31.14 MiB/s [2024-11-17T01:43:45.866Z] 7474.62 IOPS, 29.20 MiB/s [2024-11-17T01:43:45.866Z] 7034.94 IOPS, 27.48 MiB/s [2024-11-17T01:43:45.866Z] 6644.11 IOPS, 25.95 MiB/s [2024-11-17T01:43:45.866Z] 6450.63 IOPS, 25.20 MiB/s [2024-11-17T01:43:45.866Z] 6551.70 IOPS, 25.59 MiB/s [2024-11-17T01:43:45.866Z] 6641.19 IOPS, 25.94 MiB/s [2024-11-17T01:43:45.866Z] 6861.14 IOPS, 26.80 MiB/s [2024-11-17T01:43:45.866Z] 7068.39 IOPS, 27.61 MiB/s [2024-11-17T01:43:45.866Z] 7250.17 IOPS, 28.32 MiB/s [2024-11-17T01:43:45.866Z] 7323.44 IOPS, 28.61 MiB/s [2024-11-17T01:43:45.866Z] 7363.62 IOPS, 28.76 MiB/s [2024-11-17T01:43:45.866Z] 7397.52 IOPS, 28.90 MiB/s [2024-11-17T01:43:45.866Z] 7499.82 IOPS, 29.30 MiB/s [2024-11-17T01:43:45.866Z] 7641.90 IOPS, 29.85 MiB/s [2024-11-17T01:43:45.866Z] 7763.93 IOPS, 30.33 MiB/s [2024-11-17T01:43:45.866Z] [2024-11-17 01:43:42.201758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:81352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.201883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.201935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.201959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.201991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.202131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:37.407 [2024-11-17 01:43:42.202210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.202491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.202535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.202579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:88 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.202668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.202741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.202790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.202965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.202986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.203013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.203033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.203061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.203081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.203109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.203129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.203157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.407 [2024-11-17 01:43:42.203191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.203249] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.203268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.203293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.203312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:37.407 [2024-11-17 01:43:42.203338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.407 [2024-11-17 01:43:42.203356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.203413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.203457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.203502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.203546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.203591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.203681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.203730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 
sqhd:0032 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.203780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.203872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.203924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.203952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.203973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.204016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.204036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.204065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.204095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.204156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.204176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.204218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.204237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.205578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.205616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.205653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.205675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.205703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.205724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.205752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.205772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.205799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.205818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.205844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.205880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.205909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.205929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.205956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.205975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.206022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.206084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.206133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.206179] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.206226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.206272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.206320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.408 [2024-11-17 01:43:42.206374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:37.408 [2024-11-17 01:43:42.206402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.408 [2024-11-17 01:43:42.206421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.206467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.206514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.206561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.206607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:37.409 [2024-11-17 01:43:42.206654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.206711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.206757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.206818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.206867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.206913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.206959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.206986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.207005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.207051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.207096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:114 nsid:1 lba:81928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.207143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.207212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.207260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.207323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.207371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.207416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:82080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.207476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.207524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.207570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.207644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207681] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.207702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.207731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.207751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:82136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.210352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.210412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:82200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.210460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.210524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.210571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.210617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.210663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.210710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 
sqhd:006d p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.210756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.210817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:82400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.409 [2024-11-17 01:43:42.210865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.409 [2024-11-17 01:43:42.210911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:37.409 [2024-11-17 01:43:42.210938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.210957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.210984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.211003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:81384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.211177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.211269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.211407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.211499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.211590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.211680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.211738] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.211957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.211985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.212020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.212067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:82312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.212114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.212176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.212223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:23:37.410 [2024-11-17 01:43:42.212269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.212315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.212361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.212417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.212444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.212463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.213953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.213989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.214047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.214095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.214158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.214208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 
nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.410 [2024-11-17 01:43:42.214257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.214305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.214352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.410 [2024-11-17 01:43:42.214399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:37.410 [2024-11-17 01:43:42.214427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.214461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.214522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.214569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.214615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.214661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:81968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.214707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214734] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.214753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.214814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.214904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.214951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.214978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.214998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.215025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.215044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.215070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.215090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.215117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.215147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.216486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.216543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 
00:23:37.411 [2024-11-17 01:43:42.216572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.216593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.216639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.216686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.216733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.216778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.216842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.216893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.216939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.216984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:82632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.217008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.217036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.217067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:13 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.217098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.217118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.217145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.217164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.217191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.217210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.217237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.217257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.217284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.217303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.217329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.217349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.217376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.217396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.218392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.218425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.218460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.218482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.218510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.411 [2024-11-17 01:43:42.218529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.218556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.218576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.218603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.218623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:37.411 [2024-11-17 01:43:42.218662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.411 [2024-11-17 01:43:42.218683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.218710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.218729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.218756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.218775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.218815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.218838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.218865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.218885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.218912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.218931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.218958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.218977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.219022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:37.412 [2024-11-17 01:43:42.219046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.219074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.219102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.219130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.219149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.219176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.219195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.219222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.219241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.219279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.219299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.219327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.219346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.219373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.219393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.220450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.220507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 
lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.220554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.220600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.220646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.220692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.220738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.220783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.220845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.220908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.220957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.220984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.221003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.221030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.221050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.221077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.221096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.221141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.221165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.221193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.221213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.221240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.221259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.221286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.412 [2024-11-17 01:43:42.221305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.221332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.221365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:37.412 [2024-11-17 01:43:42.222493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:82208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.412 [2024-11-17 01:43:42.222530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.222580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.222607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.222636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.222670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 
00:23:37.413 [2024-11-17 01:43:42.222699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.222720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.222748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.222768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.222795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.222831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.222895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:82640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.222917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.222946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:82672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.222968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.222997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.223018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.223068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.223118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.223183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.223245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.223308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.223602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.223704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.223767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.223837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:82712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.223893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.223924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:82752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.223946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.224005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.224027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.224056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.224077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.224135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.224170] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.224197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.224216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.226473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.226572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.226621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.226682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.226728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:83176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.226773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.226873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:82688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.226926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.226954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:82744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:37.413 [2024-11-17 01:43:42.226974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.227003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:83200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:37.413 [2024-11-17 01:43:42.227024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.227052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:82800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.227073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:37.413 [2024-11-17 01:43:42.227101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:82832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:37.413 [2024-11-17 01:43:42.227122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:37.413 7846.03 IOPS, 30.65 MiB/s [2024-11-17T01:43:45.872Z] 7870.34 IOPS, 30.74 MiB/s [2024-11-17T01:43:45.872Z] 7887.12 IOPS, 30.81 MiB/s [2024-11-17T01:43:45.872Z] Received shutdown signal, test time was about 33.134862 seconds 00:23:37.413 00:23:37.413 Latency(us) 00:23:37.413 [2024-11-17T01:43:45.872Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.413 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:37.413 Verification LBA range: start 0x0 length 0x4000 00:23:37.413 Nvme0n1 : 33.13 7886.60 30.81 0.00 0.00 16198.63 212.25 4026531.84 00:23:37.413 [2024-11-17T01:43:45.872Z] =================================================================================================================== 00:23:37.413 [2024-11-17T01:43:45.872Z] Total : 7886.60 30.81 0.00 0.00 16198.63 212.25 4026531.84 00:23:37.414 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:37.414 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:37.414 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:23:37.414 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:23:37.414 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:37.414 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:37.673 rmmod nvme_tcp 00:23:37.673 rmmod nvme_fabrics 00:23:37.673 rmmod nvme_keyring 00:23:37.673 01:43:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 82388 ']' 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 82388 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 82388 ']' 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 82388 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82388 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:37.673 killing process with pid 82388 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82388' 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 82388 00:23:37.673 01:43:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 82388 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:38.611 01:43:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:38.611 01:43:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:38.611 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:38.611 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:38.611 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@300 -- # return 0 00:23:38.870 00:23:38.870 real 0m40.849s 00:23:38.870 user 2m10.437s 00:23:38.870 sys 0m10.627s 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:38.870 ************************************ 00:23:38.870 END TEST nvmf_host_multipath_status 00:23:38.870 ************************************ 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.870 ************************************ 00:23:38.870 START TEST nvmf_discovery_remove_ifc 00:23:38.870 ************************************ 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:38.870 * Looking for test storage... 
00:23:38.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:38.870 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:39.141 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:39.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.142 --rc genhtml_branch_coverage=1 00:23:39.142 --rc genhtml_function_coverage=1 00:23:39.142 --rc genhtml_legend=1 00:23:39.142 --rc geninfo_all_blocks=1 00:23:39.142 --rc geninfo_unexecuted_blocks=1 00:23:39.142 00:23:39.142 ' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:39.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.142 --rc genhtml_branch_coverage=1 00:23:39.142 --rc genhtml_function_coverage=1 00:23:39.142 --rc genhtml_legend=1 00:23:39.142 --rc geninfo_all_blocks=1 00:23:39.142 --rc geninfo_unexecuted_blocks=1 00:23:39.142 00:23:39.142 ' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:39.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.142 --rc genhtml_branch_coverage=1 00:23:39.142 --rc genhtml_function_coverage=1 00:23:39.142 --rc genhtml_legend=1 00:23:39.142 --rc geninfo_all_blocks=1 00:23:39.142 --rc geninfo_unexecuted_blocks=1 00:23:39.142 00:23:39.142 ' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:39.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:39.142 --rc genhtml_branch_coverage=1 00:23:39.142 --rc genhtml_function_coverage=1 00:23:39.142 --rc genhtml_legend=1 00:23:39.142 --rc geninfo_all_blocks=1 00:23:39.142 --rc geninfo_unexecuted_blocks=1 00:23:39.142 00:23:39.142 ' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:39.142 01:43:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:39.142 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 
-- # discovery_port=8009 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:39.142 01:43:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:39.142 Cannot find device "nvmf_init_br" 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # true 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:39.142 Cannot find device "nvmf_init_br2" 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # true 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:39.142 Cannot find device "nvmf_tgt_br" 00:23:39.142 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@164 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:39.143 Cannot find device "nvmf_tgt_br2" 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@165 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:39.143 Cannot find device "nvmf_init_br" 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@166 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:39.143 Cannot find device "nvmf_init_br2" 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@167 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:39.143 Cannot find device "nvmf_tgt_br" 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@168 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:39.143 Cannot find device "nvmf_tgt_br2" 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:39.143 Cannot find device "nvmf_br" 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@170 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:39.143 Cannot find device "nvmf_init_if" 00:23:39.143 01:43:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:39.143 Cannot find device "nvmf_init_if2" 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:39.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@173 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:39.143 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@174 -- # true 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:39.143 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:39.401 01:43:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:39.401 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:39.402 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:39.402 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.056 ms 00:23:39.402 00:23:39.402 --- 10.0.0.3 ping statistics --- 00:23:39.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.402 rtt min/avg/max/mdev = 0.056/0.056/0.056/0.000 ms 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:39.402 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:39.402 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.041 ms 00:23:39.402 00:23:39.402 --- 10.0.0.4 ping statistics --- 00:23:39.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.402 rtt min/avg/max/mdev = 0.041/0.041/0.041/0.000 ms 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:39.402 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:39.402 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:23:39.402 00:23:39.402 --- 10.0.0.1 ping statistics --- 00:23:39.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.402 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:39.402 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:39.402 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:23:39.402 00:23:39.402 --- 10.0.0.2 ping statistics --- 00:23:39.402 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:39.402 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@461 -- # return 0 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=83288 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 83288 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 83288 ']' 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
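The traces above show nvmftestinit assembling the virtual test network before the target comes up: a dedicated network namespace, veth pairs whose target ends are moved into that namespace, a bridge joining the host-side ends, iptables ACCEPT rules for port 4420, and ping checks across 10.0.0.1-10.0.0.4. A condensed sketch of that topology, reconstructed from the commands traced above (the second initiator/target pair and error handling are elided; this is an illustrative outline, not the full nvmf_veth_init implementation):

    # Target runs in its own netns; the initiator stays in the default netns.
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator-side veth pair
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target-side veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge; ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br                     # bridge the two host-side ends
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.3                                          # initiator -> target reachability
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1           # target -> initiator reachability

With the network in place, nvmfappstart launches nvmf_tgt inside the namespace (ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2) and waits for /var/tmp/spdk.sock, which is the startup output that follows.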
00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:39.402 01:43:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:39.661 [2024-11-17 01:43:47.933487] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:39.661 [2024-11-17 01:43:47.933650] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.661 [2024-11-17 01:43:48.114052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.919 [2024-11-17 01:43:48.196490] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.919 [2024-11-17 01:43:48.196550] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.919 [2024-11-17 01:43:48.196583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.919 [2024-11-17 01:43:48.196604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.919 [2024-11-17 01:43:48.196617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.919 [2024-11-17 01:43:48.197695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.919 [2024-11-17 01:43:48.349705] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:40.487 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:40.488 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:40.488 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:40.488 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:40.488 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.488 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.488 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:40.488 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:40.488 01:43:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.761 [2024-11-17 01:43:48.958880] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.761 [2024-11-17 01:43:48.967116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:23:40.761 null0 00:23:40.761 [2024-11-17 01:43:48.998968] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@59 -- # hostpid=83320 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 83320 /tmp/host.sock 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 83320 ']' 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:40.761 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:40.761 01:43:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:40.761 [2024-11-17 01:43:49.139372] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:23:40.761 [2024-11-17 01:43:49.139572] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83320 ] 00:23:41.033 [2024-11-17 01:43:49.326402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.033 [2024-11-17 01:43:49.450544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:41.970 [2024-11-17 01:43:50.274443] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
--ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:41.970 01:43:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.348 [2024-11-17 01:43:51.383668] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:43.348 [2024-11-17 01:43:51.383713] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:43.348 [2024-11-17 01:43:51.383769] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:43.348 [2024-11-17 01:43:51.389702] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme0 00:23:43.348 [2024-11-17 01:43:51.452420] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.3:4420 00:23:43.348 [2024-11-17 01:43:51.453773] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x61500002b500:1 started. 00:23:43.348 [2024-11-17 01:43:51.455804] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:43.348 [2024-11-17 01:43:51.455911] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:43.348 [2024-11-17 01:43:51.455989] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:43.348 [2024-11-17 01:43:51.456043] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme0 done 00:23:43.348 [2024-11-17 01:43:51.456075] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.348 [2024-11-17 01:43:51.462609] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x61500002b500 was disconnected and freed. delete nvme_qpair. 
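At this point the host app (pid 83320, RPC socket /tmp/host.sock, started with --wait-for-rpc) has attached the discovery controller at 10.0.0.3:8009, connected a controller to the advertised subsystem on port 4420, and exposed it as bdev nvme0n1. The RPC sequence driving this, condensed from the rpc_cmd traces above (rpc.py stands in here for the harness's rpc_cmd wrapper on the same socket; an illustrative sketch, not the script verbatim):

    # Configure bdev_nvme before framework init (the host app was started with --wait-for-rpc).
    rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
    rpc.py -s /tmp/host.sock framework_start_init
    # Attach via discovery on 10.0.0.3:8009 with the timeouts used by the test.
    rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
        -a 10.0.0.3 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach
    # wait_for_bdev nvme0n1: the bdev list should now contain the new namespace.
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs

The repeated get_bdev_list / sleep 1 iterations that follow are this same poll re-run once per second.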
00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.3/24 dev nvmf_tgt_if 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:43.348 01:43:51 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:44.287 01:43:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:45.224 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:45.224 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:45.224 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:45.224 01:43:53 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:45.224 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:45.224 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:45.224 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:45.224 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:45.483 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:45.483 01:43:53 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:46.419 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:46.420 01:43:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:47.356 01:43:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:48.733 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:48.733 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:48.733 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:48.733 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.733 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:48.733 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:48.733 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:48.734 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.734 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:48.734 01:43:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:48.734 [2024-11-17 01:43:56.883360] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:48.734 [2024-11-17 01:43:56.883445] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.734 [2024-11-17 01:43:56.883465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.734 [2024-11-17 01:43:56.883482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.734 [2024-11-17 01:43:56.883493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.734 [2024-11-17 01:43:56.883505] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.734 [2024-11-17 01:43:56.883517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.734 [2024-11-17 01:43:56.883528] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.734 [2024-11-17 01:43:56.883539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.734 [2024-11-17 01:43:56.883551] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.734 [2024-11-17 01:43:56.883562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.734 [2024-11-17 01:43:56.883574] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:48.734 [2024-11-17 01:43:56.893351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:48.734 [2024-11-17 01:43:56.903371] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:23:48.734 [2024-11-17 01:43:56.903403] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
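For reference, the repeated blocks above are host/discovery_remove_ifc.sh polling the bdev list while the target interface is being torn down. Reconstructed from the xtrace (the rpc_cmd wrapper and the /tmp/host.sock path are as traced; the function bodies are a sketch, not the verbatim script), the pattern is roughly:

    get_bdev_list() {
        # One sorted, space-separated line of bdev names from the SPDK host app.
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        # Poll once per second until the list matches what the test expects:
        # '' while waiting for nvme0n1 to disappear after the interface is removed,
        # "nvme1n1" later, once discovery re-attaches the namespace.
        local expected=${1:-}
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }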
00:23:48.734 [2024-11-17 01:43:56.903413] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:23:48.734 [2024-11-17 01:43:56.903428] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:48.734 [2024-11-17 01:43:56.903500] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:49.671 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:49.671 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:49.671 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:49.671 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:49.671 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:49.672 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:49.672 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:49.672 [2024-11-17 01:43:57.948886] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 110 00:23:49.672 [2024-11-17 01:43:57.949006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b000 with addr=10.0.0.3, port=4420 00:23:49.672 [2024-11-17 01:43:57.949059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b000 is same with the state(6) to be set 00:23:49.672 [2024-11-17 01:43:57.949125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b000 (9): Bad file descriptor 00:23:49.672 [2024-11-17 01:43:57.950090] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:23:49.672 [2024-11-17 01:43:57.950234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:49.672 [2024-11-17 01:43:57.950269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:49.672 [2024-11-17 01:43:57.950303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:49.672 [2024-11-17 01:43:57.950327] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:49.672 [2024-11-17 01:43:57.950345] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:49.672 [2024-11-17 01:43:57.950360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:49.672 [2024-11-17 01:43:57.950383] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:23:49.672 [2024-11-17 01:43:57.950405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:23:49.672 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:49.672 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:49.672 01:43:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:50.609 [2024-11-17 01:43:58.950487] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:23:50.609 [2024-11-17 01:43:58.950526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:23:50.609 [2024-11-17 01:43:58.950554] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:23:50.609 [2024-11-17 01:43:58.950583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:23:50.609 [2024-11-17 01:43:58.950595] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:23:50.609 [2024-11-17 01:43:58.950607] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:23:50.609 [2024-11-17 01:43:58.950616] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:23:50.609 [2024-11-17 01:43:58.950623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:23:50.609 [2024-11-17 01:43:58.950669] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.3:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 00:23:50.609 [2024-11-17 01:43:58.950716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.609 [2024-11-17 01:43:58.950736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.609 [2024-11-17 01:43:58.950759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.609 [2024-11-17 01:43:58.950772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.609 [2024-11-17 01:43:58.950784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.609 [2024-11-17 01:43:58.950795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.609 [2024-11-17 01:43:58.950826] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.609 [2024-11-17 01:43:58.950856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.609 [2024-11-17 01:43:58.950871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:50.609 [2024-11-17 01:43:58.950883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:50.609 [2024-11-17 01:43:58.950923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:23:50.609 [2024-11-17 01:43:58.951427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:23:50.609 [2024-11-17 01:43:58.952459] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:50.609 [2024-11-17 01:43:58.952498] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:23:50.609 01:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.609 01:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.609 01:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.609 01:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.609 01:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.609 01:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.609 01:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.609 01:43:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:50.609 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.868 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:50.868 01:43:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:51.803 01:44:00 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:51.803 01:44:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:52.739 [2024-11-17 01:44:00.958377] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:23:52.739 [2024-11-17 01:44:00.958407] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:23:52.739 [2024-11-17 01:44:00.958436] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:23:52.739 [2024-11-17 01:44:00.964434] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 new subsystem nvme1 00:23:52.739 [2024-11-17 01:44:01.019104] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.3:4420 00:23:52.739 [2024-11-17 01:44:01.020324] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x61500002c180:1 started. 00:23:52.739 [2024-11-17 01:44:01.022416] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:52.739 [2024-11-17 01:44:01.022619] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:52.739 [2024-11-17 01:44:01.022719] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:52.739 [2024-11-17 01:44:01.022835] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach nvme1 done 00:23:52.739 [2024-11-17 01:44:01.022977] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.3:4420 found again 00:23:52.739 [2024-11-17 01:44:01.027277] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x61500002c180 was disconnected and freed. delete nvme_qpair. 
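The recovery traced at host/discovery_remove_ifc.sh@82-86 above restores the target address inside the network namespace and then waits for the rediscovered namespace to surface as a new bdev. In outline (same commands as the trace; wait_for_bdev as sketched earlier):

    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if   # put the listener address back
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    wait_for_bdev nvme1n1   # discovery re-attaches and exposes the namespace as nvme1n1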
00:23:52.739 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:52.739 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:52.739 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:52.739 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:52.739 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:52.739 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:52.739 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 83320 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 83320 ']' 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 83320 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83320 00:23:52.999 killing process with pid 83320 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83320' 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 83320 00:23:52.999 01:44:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 83320 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:53.936 rmmod nvme_tcp 00:23:53.936 rmmod nvme_fabrics 00:23:53.936 rmmod nvme_keyring 00:23:53.936 01:44:02 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 83288 ']' 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 83288 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 83288 ']' 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 83288 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83288 00:23:53.936 killing process with pid 83288 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83288' 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 83288 00:23:53.936 01:44:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 83288 00:23:54.874 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:54.874 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:54.874 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@300 -- # return 0 00:23:54.875 ************************************ 00:23:54.875 END TEST nvmf_discovery_remove_ifc 00:23:54.875 00:23:54.875 real 0m16.081s 00:23:54.875 user 0m27.260s 00:23:54.875 sys 0m2.502s 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:54.875 ************************************ 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.875 ************************************ 00:23:54.875 START TEST nvmf_identify_kernel_target 00:23:54.875 ************************************ 00:23:54.875 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:55.134 * Looking for test storage... 
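For reference, the killprocess teardown traced during the previous test's cleanup (autotest_common.sh@954-978) is the usual guard-then-kill pattern. A sketch reconstructed from the xtrace, not the verbatim helper (the real one also special-cases processes running under sudo, visible above as the reactor_0 = sudo check):

    killprocess() {
        local pid=$1 process_name
        [[ -n $pid ]] || return 1                 # the '[' -z ... ']' guard in the trace
        kill -0 "$pid" || return 1                # bail out if the pid is not running
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                               # works because the app was started by this shell
    }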
00:23:55.134 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:55.134 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:55.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.135 --rc genhtml_branch_coverage=1 00:23:55.135 --rc genhtml_function_coverage=1 00:23:55.135 --rc genhtml_legend=1 00:23:55.135 --rc geninfo_all_blocks=1 00:23:55.135 --rc geninfo_unexecuted_blocks=1 00:23:55.135 00:23:55.135 ' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:55.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.135 --rc genhtml_branch_coverage=1 00:23:55.135 --rc genhtml_function_coverage=1 00:23:55.135 --rc genhtml_legend=1 00:23:55.135 --rc geninfo_all_blocks=1 00:23:55.135 --rc geninfo_unexecuted_blocks=1 00:23:55.135 00:23:55.135 ' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:55.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.135 --rc genhtml_branch_coverage=1 00:23:55.135 --rc genhtml_function_coverage=1 00:23:55.135 --rc genhtml_legend=1 00:23:55.135 --rc geninfo_all_blocks=1 00:23:55.135 --rc geninfo_unexecuted_blocks=1 00:23:55.135 00:23:55.135 ' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:55.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:55.135 --rc genhtml_branch_coverage=1 00:23:55.135 --rc genhtml_function_coverage=1 00:23:55.135 --rc genhtml_legend=1 00:23:55.135 --rc geninfo_all_blocks=1 00:23:55.135 --rc geninfo_unexecuted_blocks=1 00:23:55.135 00:23:55.135 ' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 
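The lcov version gate traced just above (scripts/common.sh lt / cmp_versions, comparing 1.15 against 2) is a small split-and-compare routine. A reconstruction under the same field-splitting rules, not the verbatim script:

    lt() { cmp_versions "$1" '<' "$2"; }

    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v len
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$3"
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            if (( ${ver1[v]:-0} > ${ver2[v]:-0} )); then
                [[ $op == '>' || $op == '>=' ]]; return
            elif (( ${ver1[v]:-0} < ${ver2[v]:-0} )); then
                [[ $op == '<' || $op == '<=' ]]; return
            fi
        done
        [[ $op == '=' || $op == '<=' || $op == '>=' ]]   # versions compare equal
    }

Here lt 1.15 2 compares 1 against 2 in the first field and returns true, which is why the older-lcov LCOV_OPTS are exported in the trace that follows.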
00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:55.135 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:55.135 01:44:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:55.135 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:55.136 01:44:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:55.136 Cannot find device "nvmf_init_br" 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # true 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:55.136 Cannot find device "nvmf_init_br2" 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # true 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:55.136 Cannot find device "nvmf_tgt_br" 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@164 -- # true 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:55.136 Cannot find device "nvmf_tgt_br2" 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@165 -- # true 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:55.136 Cannot find device "nvmf_init_br" 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@166 -- # true 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:55.136 Cannot find device "nvmf_init_br2" 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@167 -- # true 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:55.136 Cannot find device "nvmf_tgt_br" 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@168 -- # true 00:23:55.136 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:55.396 Cannot find device "nvmf_tgt_br2" 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # true 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:55.396 Cannot find device "nvmf_br" 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@170 -- # true 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:55.396 Cannot find device "nvmf_init_if" 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # true 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:55.396 Cannot find device "nvmf_init_if2" 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- # true 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:55.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.396 01:44:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@173 -- # true 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:55.396 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@174 -- # true 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:55.396 01:44:03 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:55.396 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:55.656 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:55.656 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:23:55.656 00:23:55.656 --- 10.0.0.3 ping statistics --- 00:23:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.656 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:55.656 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:55.656 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.072 ms 00:23:55.656 00:23:55.656 --- 10.0.0.4 ping statistics --- 00:23:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.656 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:55.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.030 ms 00:23:55.656 00:23:55.656 --- 10.0.0.1 ping statistics --- 00:23:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.656 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:55.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:55.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:23:55.656 00:23:55.656 --- 10.0.0.2 ping statistics --- 00:23:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.656 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # return 0 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.656 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:55.657 01:44:03 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:23:55.916 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:55.916 Waiting for block devices as requested 00:23:55.916 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:23:56.175 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:23:56.175 No valid GPT data, bailing 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:23:56.175 01:44:04 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:23:56.175 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:23:56.434 No valid GPT data, bailing 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:23:56.434 No valid GPT data, bailing 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:23:56.434 No valid GPT data, bailing 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:23:56.434 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:23:56.435 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:23:56.435 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:56.435 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -a 10.0.0.1 -t tcp -s 4420 00:23:56.435 00:23:56.435 Discovery Log Number of Records 2, Generation counter 2 00:23:56.435 =====Discovery Log Entry 0====== 00:23:56.435 trtype: tcp 00:23:56.435 adrfam: ipv4 00:23:56.435 subtype: current discovery subsystem 00:23:56.435 treq: not specified, sq flow control disable supported 00:23:56.435 portid: 1 00:23:56.435 trsvcid: 4420 00:23:56.435 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:56.435 traddr: 10.0.0.1 00:23:56.435 eflags: none 00:23:56.435 sectype: none 00:23:56.435 =====Discovery Log Entry 1====== 00:23:56.435 trtype: tcp 00:23:56.435 adrfam: ipv4 00:23:56.435 subtype: nvme subsystem 00:23:56.435 treq: not 
specified, sq flow control disable supported 00:23:56.435 portid: 1 00:23:56.435 trsvcid: 4420 00:23:56.435 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:56.435 traddr: 10.0.0.1 00:23:56.435 eflags: none 00:23:56.435 sectype: none 00:23:56.435 01:44:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:56.435 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:56.694 ===================================================== 00:23:56.694 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:56.694 ===================================================== 00:23:56.694 Controller Capabilities/Features 00:23:56.694 ================================ 00:23:56.694 Vendor ID: 0000 00:23:56.694 Subsystem Vendor ID: 0000 00:23:56.694 Serial Number: 2f23192e5370f7b34d18 00:23:56.694 Model Number: Linux 00:23:56.694 Firmware Version: 6.8.9-20 00:23:56.694 Recommended Arb Burst: 0 00:23:56.694 IEEE OUI Identifier: 00 00 00 00:23:56.694 Multi-path I/O 00:23:56.694 May have multiple subsystem ports: No 00:23:56.694 May have multiple controllers: No 00:23:56.694 Associated with SR-IOV VF: No 00:23:56.694 Max Data Transfer Size: Unlimited 00:23:56.694 Max Number of Namespaces: 0 00:23:56.694 Max Number of I/O Queues: 1024 00:23:56.694 NVMe Specification Version (VS): 1.3 00:23:56.694 NVMe Specification Version (Identify): 1.3 00:23:56.695 Maximum Queue Entries: 1024 00:23:56.695 Contiguous Queues Required: No 00:23:56.695 Arbitration Mechanisms Supported 00:23:56.695 Weighted Round Robin: Not Supported 00:23:56.695 Vendor Specific: Not Supported 00:23:56.695 Reset Timeout: 7500 ms 00:23:56.695 Doorbell Stride: 4 bytes 00:23:56.695 NVM Subsystem Reset: Not Supported 00:23:56.695 Command Sets Supported 00:23:56.695 NVM Command Set: Supported 00:23:56.695 Boot Partition: Not Supported 00:23:56.695 Memory Page Size Minimum: 4096 bytes 00:23:56.695 Memory Page Size Maximum: 4096 bytes 00:23:56.695 Persistent Memory Region: Not Supported 00:23:56.695 Optional Asynchronous Events Supported 00:23:56.695 Namespace Attribute Notices: Not Supported 00:23:56.695 Firmware Activation Notices: Not Supported 00:23:56.695 ANA Change Notices: Not Supported 00:23:56.695 PLE Aggregate Log Change Notices: Not Supported 00:23:56.695 LBA Status Info Alert Notices: Not Supported 00:23:56.695 EGE Aggregate Log Change Notices: Not Supported 00:23:56.695 Normal NVM Subsystem Shutdown event: Not Supported 00:23:56.695 Zone Descriptor Change Notices: Not Supported 00:23:56.695 Discovery Log Change Notices: Supported 00:23:56.695 Controller Attributes 00:23:56.695 128-bit Host Identifier: Not Supported 00:23:56.695 Non-Operational Permissive Mode: Not Supported 00:23:56.695 NVM Sets: Not Supported 00:23:56.695 Read Recovery Levels: Not Supported 00:23:56.695 Endurance Groups: Not Supported 00:23:56.695 Predictable Latency Mode: Not Supported 00:23:56.695 Traffic Based Keep ALive: Not Supported 00:23:56.695 Namespace Granularity: Not Supported 00:23:56.695 SQ Associations: Not Supported 00:23:56.695 UUID List: Not Supported 00:23:56.695 Multi-Domain Subsystem: Not Supported 00:23:56.695 Fixed Capacity Management: Not Supported 00:23:56.695 Variable Capacity Management: Not Supported 00:23:56.695 Delete Endurance Group: Not Supported 00:23:56.695 Delete NVM Set: Not Supported 00:23:56.695 Extended LBA Formats Supported: Not Supported 00:23:56.695 Flexible Data 
Placement Supported: Not Supported 00:23:56.695 00:23:56.695 Controller Memory Buffer Support 00:23:56.695 ================================ 00:23:56.695 Supported: No 00:23:56.695 00:23:56.695 Persistent Memory Region Support 00:23:56.695 ================================ 00:23:56.695 Supported: No 00:23:56.695 00:23:56.695 Admin Command Set Attributes 00:23:56.695 ============================ 00:23:56.695 Security Send/Receive: Not Supported 00:23:56.695 Format NVM: Not Supported 00:23:56.695 Firmware Activate/Download: Not Supported 00:23:56.695 Namespace Management: Not Supported 00:23:56.695 Device Self-Test: Not Supported 00:23:56.695 Directives: Not Supported 00:23:56.695 NVMe-MI: Not Supported 00:23:56.695 Virtualization Management: Not Supported 00:23:56.695 Doorbell Buffer Config: Not Supported 00:23:56.695 Get LBA Status Capability: Not Supported 00:23:56.695 Command & Feature Lockdown Capability: Not Supported 00:23:56.695 Abort Command Limit: 1 00:23:56.695 Async Event Request Limit: 1 00:23:56.695 Number of Firmware Slots: N/A 00:23:56.695 Firmware Slot 1 Read-Only: N/A 00:23:56.954 Firmware Activation Without Reset: N/A 00:23:56.954 Multiple Update Detection Support: N/A 00:23:56.954 Firmware Update Granularity: No Information Provided 00:23:56.954 Per-Namespace SMART Log: No 00:23:56.954 Asymmetric Namespace Access Log Page: Not Supported 00:23:56.954 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:56.954 Command Effects Log Page: Not Supported 00:23:56.954 Get Log Page Extended Data: Supported 00:23:56.954 Telemetry Log Pages: Not Supported 00:23:56.954 Persistent Event Log Pages: Not Supported 00:23:56.954 Supported Log Pages Log Page: May Support 00:23:56.954 Commands Supported & Effects Log Page: Not Supported 00:23:56.954 Feature Identifiers & Effects Log Page:May Support 00:23:56.954 NVMe-MI Commands & Effects Log Page: May Support 00:23:56.954 Data Area 4 for Telemetry Log: Not Supported 00:23:56.954 Error Log Page Entries Supported: 1 00:23:56.954 Keep Alive: Not Supported 00:23:56.954 00:23:56.954 NVM Command Set Attributes 00:23:56.954 ========================== 00:23:56.954 Submission Queue Entry Size 00:23:56.954 Max: 1 00:23:56.955 Min: 1 00:23:56.955 Completion Queue Entry Size 00:23:56.955 Max: 1 00:23:56.955 Min: 1 00:23:56.955 Number of Namespaces: 0 00:23:56.955 Compare Command: Not Supported 00:23:56.955 Write Uncorrectable Command: Not Supported 00:23:56.955 Dataset Management Command: Not Supported 00:23:56.955 Write Zeroes Command: Not Supported 00:23:56.955 Set Features Save Field: Not Supported 00:23:56.955 Reservations: Not Supported 00:23:56.955 Timestamp: Not Supported 00:23:56.955 Copy: Not Supported 00:23:56.955 Volatile Write Cache: Not Present 00:23:56.955 Atomic Write Unit (Normal): 1 00:23:56.955 Atomic Write Unit (PFail): 1 00:23:56.955 Atomic Compare & Write Unit: 1 00:23:56.955 Fused Compare & Write: Not Supported 00:23:56.955 Scatter-Gather List 00:23:56.955 SGL Command Set: Supported 00:23:56.955 SGL Keyed: Not Supported 00:23:56.955 SGL Bit Bucket Descriptor: Not Supported 00:23:56.955 SGL Metadata Pointer: Not Supported 00:23:56.955 Oversized SGL: Not Supported 00:23:56.955 SGL Metadata Address: Not Supported 00:23:56.955 SGL Offset: Supported 00:23:56.955 Transport SGL Data Block: Not Supported 00:23:56.955 Replay Protected Memory Block: Not Supported 00:23:56.955 00:23:56.955 Firmware Slot Information 00:23:56.955 ========================= 00:23:56.955 Active slot: 0 00:23:56.955 00:23:56.955 00:23:56.955 Error Log 
00:23:56.955 ========= 00:23:56.955 00:23:56.955 Active Namespaces 00:23:56.955 ================= 00:23:56.955 Discovery Log Page 00:23:56.955 ================== 00:23:56.955 Generation Counter: 2 00:23:56.955 Number of Records: 2 00:23:56.955 Record Format: 0 00:23:56.955 00:23:56.955 Discovery Log Entry 0 00:23:56.955 ---------------------- 00:23:56.955 Transport Type: 3 (TCP) 00:23:56.955 Address Family: 1 (IPv4) 00:23:56.955 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:56.955 Entry Flags: 00:23:56.955 Duplicate Returned Information: 0 00:23:56.955 Explicit Persistent Connection Support for Discovery: 0 00:23:56.955 Transport Requirements: 00:23:56.955 Secure Channel: Not Specified 00:23:56.955 Port ID: 1 (0x0001) 00:23:56.955 Controller ID: 65535 (0xffff) 00:23:56.955 Admin Max SQ Size: 32 00:23:56.955 Transport Service Identifier: 4420 00:23:56.955 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:56.955 Transport Address: 10.0.0.1 00:23:56.955 Discovery Log Entry 1 00:23:56.955 ---------------------- 00:23:56.955 Transport Type: 3 (TCP) 00:23:56.955 Address Family: 1 (IPv4) 00:23:56.955 Subsystem Type: 2 (NVM Subsystem) 00:23:56.955 Entry Flags: 00:23:56.955 Duplicate Returned Information: 0 00:23:56.955 Explicit Persistent Connection Support for Discovery: 0 00:23:56.955 Transport Requirements: 00:23:56.955 Secure Channel: Not Specified 00:23:56.955 Port ID: 1 (0x0001) 00:23:56.955 Controller ID: 65535 (0xffff) 00:23:56.955 Admin Max SQ Size: 32 00:23:56.955 Transport Service Identifier: 4420 00:23:56.955 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:56.955 Transport Address: 10.0.0.1 00:23:56.955 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:57.215 get_feature(0x01) failed 00:23:57.215 get_feature(0x02) failed 00:23:57.215 get_feature(0x04) failed 00:23:57.215 ===================================================== 00:23:57.215 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:57.215 ===================================================== 00:23:57.215 Controller Capabilities/Features 00:23:57.215 ================================ 00:23:57.215 Vendor ID: 0000 00:23:57.215 Subsystem Vendor ID: 0000 00:23:57.215 Serial Number: 785695a9d0c9d6663920 00:23:57.215 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:57.215 Firmware Version: 6.8.9-20 00:23:57.215 Recommended Arb Burst: 6 00:23:57.215 IEEE OUI Identifier: 00 00 00 00:23:57.215 Multi-path I/O 00:23:57.215 May have multiple subsystem ports: Yes 00:23:57.215 May have multiple controllers: Yes 00:23:57.215 Associated with SR-IOV VF: No 00:23:57.215 Max Data Transfer Size: Unlimited 00:23:57.215 Max Number of Namespaces: 1024 00:23:57.215 Max Number of I/O Queues: 128 00:23:57.215 NVMe Specification Version (VS): 1.3 00:23:57.215 NVMe Specification Version (Identify): 1.3 00:23:57.215 Maximum Queue Entries: 1024 00:23:57.215 Contiguous Queues Required: No 00:23:57.215 Arbitration Mechanisms Supported 00:23:57.215 Weighted Round Robin: Not Supported 00:23:57.215 Vendor Specific: Not Supported 00:23:57.215 Reset Timeout: 7500 ms 00:23:57.215 Doorbell Stride: 4 bytes 00:23:57.215 NVM Subsystem Reset: Not Supported 00:23:57.215 Command Sets Supported 00:23:57.215 NVM Command Set: Supported 00:23:57.215 Boot Partition: Not Supported 00:23:57.215 Memory 
Page Size Minimum: 4096 bytes 00:23:57.215 Memory Page Size Maximum: 4096 bytes 00:23:57.215 Persistent Memory Region: Not Supported 00:23:57.215 Optional Asynchronous Events Supported 00:23:57.215 Namespace Attribute Notices: Supported 00:23:57.215 Firmware Activation Notices: Not Supported 00:23:57.215 ANA Change Notices: Supported 00:23:57.215 PLE Aggregate Log Change Notices: Not Supported 00:23:57.215 LBA Status Info Alert Notices: Not Supported 00:23:57.215 EGE Aggregate Log Change Notices: Not Supported 00:23:57.215 Normal NVM Subsystem Shutdown event: Not Supported 00:23:57.215 Zone Descriptor Change Notices: Not Supported 00:23:57.215 Discovery Log Change Notices: Not Supported 00:23:57.215 Controller Attributes 00:23:57.215 128-bit Host Identifier: Supported 00:23:57.215 Non-Operational Permissive Mode: Not Supported 00:23:57.215 NVM Sets: Not Supported 00:23:57.215 Read Recovery Levels: Not Supported 00:23:57.215 Endurance Groups: Not Supported 00:23:57.215 Predictable Latency Mode: Not Supported 00:23:57.215 Traffic Based Keep ALive: Supported 00:23:57.215 Namespace Granularity: Not Supported 00:23:57.215 SQ Associations: Not Supported 00:23:57.215 UUID List: Not Supported 00:23:57.215 Multi-Domain Subsystem: Not Supported 00:23:57.215 Fixed Capacity Management: Not Supported 00:23:57.215 Variable Capacity Management: Not Supported 00:23:57.215 Delete Endurance Group: Not Supported 00:23:57.215 Delete NVM Set: Not Supported 00:23:57.215 Extended LBA Formats Supported: Not Supported 00:23:57.215 Flexible Data Placement Supported: Not Supported 00:23:57.215 00:23:57.215 Controller Memory Buffer Support 00:23:57.215 ================================ 00:23:57.215 Supported: No 00:23:57.215 00:23:57.215 Persistent Memory Region Support 00:23:57.215 ================================ 00:23:57.215 Supported: No 00:23:57.215 00:23:57.215 Admin Command Set Attributes 00:23:57.215 ============================ 00:23:57.215 Security Send/Receive: Not Supported 00:23:57.215 Format NVM: Not Supported 00:23:57.215 Firmware Activate/Download: Not Supported 00:23:57.215 Namespace Management: Not Supported 00:23:57.215 Device Self-Test: Not Supported 00:23:57.215 Directives: Not Supported 00:23:57.215 NVMe-MI: Not Supported 00:23:57.215 Virtualization Management: Not Supported 00:23:57.215 Doorbell Buffer Config: Not Supported 00:23:57.215 Get LBA Status Capability: Not Supported 00:23:57.215 Command & Feature Lockdown Capability: Not Supported 00:23:57.215 Abort Command Limit: 4 00:23:57.215 Async Event Request Limit: 4 00:23:57.215 Number of Firmware Slots: N/A 00:23:57.215 Firmware Slot 1 Read-Only: N/A 00:23:57.215 Firmware Activation Without Reset: N/A 00:23:57.215 Multiple Update Detection Support: N/A 00:23:57.215 Firmware Update Granularity: No Information Provided 00:23:57.215 Per-Namespace SMART Log: Yes 00:23:57.215 Asymmetric Namespace Access Log Page: Supported 00:23:57.215 ANA Transition Time : 10 sec 00:23:57.215 00:23:57.215 Asymmetric Namespace Access Capabilities 00:23:57.215 ANA Optimized State : Supported 00:23:57.215 ANA Non-Optimized State : Supported 00:23:57.215 ANA Inaccessible State : Supported 00:23:57.215 ANA Persistent Loss State : Supported 00:23:57.215 ANA Change State : Supported 00:23:57.215 ANAGRPID is not changed : No 00:23:57.215 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:57.215 00:23:57.216 ANA Group Identifier Maximum : 128 00:23:57.216 Number of ANA Group Identifiers : 128 00:23:57.216 Max Number of Allowed Namespaces : 1024 00:23:57.216 Subsystem 
NQN: nqn.2016-06.io.spdk:testnqn 00:23:57.216 Command Effects Log Page: Supported 00:23:57.216 Get Log Page Extended Data: Supported 00:23:57.216 Telemetry Log Pages: Not Supported 00:23:57.216 Persistent Event Log Pages: Not Supported 00:23:57.216 Supported Log Pages Log Page: May Support 00:23:57.216 Commands Supported & Effects Log Page: Not Supported 00:23:57.216 Feature Identifiers & Effects Log Page:May Support 00:23:57.216 NVMe-MI Commands & Effects Log Page: May Support 00:23:57.216 Data Area 4 for Telemetry Log: Not Supported 00:23:57.216 Error Log Page Entries Supported: 128 00:23:57.216 Keep Alive: Supported 00:23:57.216 Keep Alive Granularity: 1000 ms 00:23:57.216 00:23:57.216 NVM Command Set Attributes 00:23:57.216 ========================== 00:23:57.216 Submission Queue Entry Size 00:23:57.216 Max: 64 00:23:57.216 Min: 64 00:23:57.216 Completion Queue Entry Size 00:23:57.216 Max: 16 00:23:57.216 Min: 16 00:23:57.216 Number of Namespaces: 1024 00:23:57.216 Compare Command: Not Supported 00:23:57.216 Write Uncorrectable Command: Not Supported 00:23:57.216 Dataset Management Command: Supported 00:23:57.216 Write Zeroes Command: Supported 00:23:57.216 Set Features Save Field: Not Supported 00:23:57.216 Reservations: Not Supported 00:23:57.216 Timestamp: Not Supported 00:23:57.216 Copy: Not Supported 00:23:57.216 Volatile Write Cache: Present 00:23:57.216 Atomic Write Unit (Normal): 1 00:23:57.216 Atomic Write Unit (PFail): 1 00:23:57.216 Atomic Compare & Write Unit: 1 00:23:57.216 Fused Compare & Write: Not Supported 00:23:57.216 Scatter-Gather List 00:23:57.216 SGL Command Set: Supported 00:23:57.216 SGL Keyed: Not Supported 00:23:57.216 SGL Bit Bucket Descriptor: Not Supported 00:23:57.216 SGL Metadata Pointer: Not Supported 00:23:57.216 Oversized SGL: Not Supported 00:23:57.216 SGL Metadata Address: Not Supported 00:23:57.216 SGL Offset: Supported 00:23:57.216 Transport SGL Data Block: Not Supported 00:23:57.216 Replay Protected Memory Block: Not Supported 00:23:57.216 00:23:57.216 Firmware Slot Information 00:23:57.216 ========================= 00:23:57.216 Active slot: 0 00:23:57.216 00:23:57.216 Asymmetric Namespace Access 00:23:57.216 =========================== 00:23:57.216 Change Count : 0 00:23:57.216 Number of ANA Group Descriptors : 1 00:23:57.216 ANA Group Descriptor : 0 00:23:57.216 ANA Group ID : 1 00:23:57.216 Number of NSID Values : 1 00:23:57.216 Change Count : 0 00:23:57.216 ANA State : 1 00:23:57.216 Namespace Identifier : 1 00:23:57.216 00:23:57.216 Commands Supported and Effects 00:23:57.216 ============================== 00:23:57.216 Admin Commands 00:23:57.216 -------------- 00:23:57.216 Get Log Page (02h): Supported 00:23:57.216 Identify (06h): Supported 00:23:57.216 Abort (08h): Supported 00:23:57.216 Set Features (09h): Supported 00:23:57.216 Get Features (0Ah): Supported 00:23:57.216 Asynchronous Event Request (0Ch): Supported 00:23:57.216 Keep Alive (18h): Supported 00:23:57.216 I/O Commands 00:23:57.216 ------------ 00:23:57.216 Flush (00h): Supported 00:23:57.216 Write (01h): Supported LBA-Change 00:23:57.216 Read (02h): Supported 00:23:57.216 Write Zeroes (08h): Supported LBA-Change 00:23:57.216 Dataset Management (09h): Supported 00:23:57.216 00:23:57.216 Error Log 00:23:57.216 ========= 00:23:57.216 Entry: 0 00:23:57.216 Error Count: 0x3 00:23:57.216 Submission Queue Id: 0x0 00:23:57.216 Command Id: 0x5 00:23:57.216 Phase Bit: 0 00:23:57.216 Status Code: 0x2 00:23:57.216 Status Code Type: 0x0 00:23:57.216 Do Not Retry: 1 00:23:57.216 Error 
Location: 0x28 00:23:57.216 LBA: 0x0 00:23:57.216 Namespace: 0x0 00:23:57.216 Vendor Log Page: 0x0 00:23:57.216 ----------- 00:23:57.216 Entry: 1 00:23:57.216 Error Count: 0x2 00:23:57.216 Submission Queue Id: 0x0 00:23:57.216 Command Id: 0x5 00:23:57.216 Phase Bit: 0 00:23:57.216 Status Code: 0x2 00:23:57.216 Status Code Type: 0x0 00:23:57.216 Do Not Retry: 1 00:23:57.216 Error Location: 0x28 00:23:57.216 LBA: 0x0 00:23:57.216 Namespace: 0x0 00:23:57.216 Vendor Log Page: 0x0 00:23:57.216 ----------- 00:23:57.216 Entry: 2 00:23:57.216 Error Count: 0x1 00:23:57.216 Submission Queue Id: 0x0 00:23:57.216 Command Id: 0x4 00:23:57.216 Phase Bit: 0 00:23:57.216 Status Code: 0x2 00:23:57.216 Status Code Type: 0x0 00:23:57.216 Do Not Retry: 1 00:23:57.216 Error Location: 0x28 00:23:57.216 LBA: 0x0 00:23:57.216 Namespace: 0x0 00:23:57.216 Vendor Log Page: 0x0 00:23:57.216 00:23:57.216 Number of Queues 00:23:57.216 ================ 00:23:57.216 Number of I/O Submission Queues: 128 00:23:57.216 Number of I/O Completion Queues: 128 00:23:57.216 00:23:57.216 ZNS Specific Controller Data 00:23:57.216 ============================ 00:23:57.216 Zone Append Size Limit: 0 00:23:57.216 00:23:57.216 00:23:57.216 Active Namespaces 00:23:57.216 ================= 00:23:57.216 get_feature(0x05) failed 00:23:57.216 Namespace ID:1 00:23:57.216 Command Set Identifier: NVM (00h) 00:23:57.216 Deallocate: Supported 00:23:57.216 Deallocated/Unwritten Error: Not Supported 00:23:57.216 Deallocated Read Value: Unknown 00:23:57.216 Deallocate in Write Zeroes: Not Supported 00:23:57.216 Deallocated Guard Field: 0xFFFF 00:23:57.216 Flush: Supported 00:23:57.216 Reservation: Not Supported 00:23:57.216 Namespace Sharing Capabilities: Multiple Controllers 00:23:57.216 Size (in LBAs): 1310720 (5GiB) 00:23:57.216 Capacity (in LBAs): 1310720 (5GiB) 00:23:57.216 Utilization (in LBAs): 1310720 (5GiB) 00:23:57.216 UUID: c0b5bf37-fb5b-4f62-b1eb-ce53429ca8a1 00:23:57.216 Thin Provisioning: Not Supported 00:23:57.216 Per-NS Atomic Units: Yes 00:23:57.216 Atomic Boundary Size (Normal): 0 00:23:57.216 Atomic Boundary Size (PFail): 0 00:23:57.216 Atomic Boundary Offset: 0 00:23:57.216 NGUID/EUI64 Never Reused: No 00:23:57.216 ANA group ID: 1 00:23:57.216 Namespace Write Protected: No 00:23:57.216 Number of LBA Formats: 1 00:23:57.216 Current LBA Format: LBA Format #00 00:23:57.216 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:23:57.216 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:57.216 rmmod nvme_tcp 00:23:57.216 rmmod nvme_fabrics 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:23:57.216 01:44:05 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:23:57.216 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:23:57.217 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:23:57.217 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:23:57.217 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@246 -- # remove_spdk_ns 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@300 -- 
# return 0 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:23:57.476 01:44:05 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:23:58.414 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:23:58.414 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:23:58.414 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:23:58.414 00:23:58.414 real 0m3.450s 00:23:58.414 user 0m1.244s 00:23:58.414 sys 0m1.563s 00:23:58.414 01:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:58.414 01:44:06 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:58.414 ************************************ 00:23:58.414 END TEST nvmf_identify_kernel_target 00:23:58.414 ************************************ 00:23:58.414 01:44:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:58.414 01:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:58.414 01:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:58.414 01:44:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:58.414 ************************************ 00:23:58.414 START TEST nvmf_auth_host 00:23:58.414 ************************************ 00:23:58.414 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:58.675 * Looking for test storage... 
00:23:58.675 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:58.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.675 --rc genhtml_branch_coverage=1 00:23:58.675 --rc genhtml_function_coverage=1 00:23:58.675 --rc genhtml_legend=1 00:23:58.675 --rc geninfo_all_blocks=1 00:23:58.675 --rc geninfo_unexecuted_blocks=1 00:23:58.675 00:23:58.675 ' 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:58.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.675 --rc genhtml_branch_coverage=1 00:23:58.675 --rc genhtml_function_coverage=1 00:23:58.675 --rc genhtml_legend=1 00:23:58.675 --rc geninfo_all_blocks=1 00:23:58.675 --rc geninfo_unexecuted_blocks=1 00:23:58.675 00:23:58.675 ' 00:23:58.675 01:44:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:58.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.675 --rc genhtml_branch_coverage=1 00:23:58.675 --rc genhtml_function_coverage=1 00:23:58.675 --rc genhtml_legend=1 00:23:58.675 --rc geninfo_all_blocks=1 00:23:58.675 --rc geninfo_unexecuted_blocks=1 00:23:58.675 00:23:58.675 ' 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:58.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:58.675 --rc genhtml_branch_coverage=1 00:23:58.675 --rc genhtml_function_coverage=1 00:23:58.675 --rc genhtml_legend=1 00:23:58.675 --rc geninfo_all_blocks=1 00:23:58.675 --rc geninfo_unexecuted_blocks=1 00:23:58.675 00:23:58.675 ' 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:58.675 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:58.675 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # nvmf_veth_init 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:23:58.676 Cannot find device "nvmf_init_br" 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # true 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:23:58.676 Cannot find device "nvmf_init_br2" 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # true 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:23:58.676 Cannot find device "nvmf_tgt_br" 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@164 -- # true 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:23:58.676 Cannot find device "nvmf_tgt_br2" 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@165 -- # true 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:23:58.676 Cannot find device "nvmf_init_br" 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@166 -- # true 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:23:58.676 Cannot find device "nvmf_init_br2" 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@167 -- # true 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:23:58.676 Cannot find device "nvmf_tgt_br" 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@168 -- # true 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:23:58.676 Cannot find device "nvmf_tgt_br2" 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # true 00:23:58.676 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:23:58.935 Cannot find device "nvmf_br" 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@170 -- # true 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:23:58.935 Cannot find device "nvmf_init_if" 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # true 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:23:58.935 Cannot find device "nvmf_init_if2" 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # true 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:58.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:58.935 01:44:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@173 -- # true 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:58.935 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@174 -- # true 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:23:58.935 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 
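The nvmf_veth_init trace above builds the virtual test topology: a dedicated network namespace for the target, veth pairs whose host-side peers hang off a bridge, and the 10.0.0.0/24 addresses used throughout the rest of the log. A condensed, hand-written sketch of the same setup follows (names and addresses are taken from the trace; only the first initiator/target pair is shown, and the iptables rules and ping checks that the script runs next are omitted):

# one namespace for the SPDK target, veth pairs bridged on the host side
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator end stays on the host
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target end moves into the namespace
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge; ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br                      # bridge joins the host-side veth peers
ip link set nvmf_tgt_br master nvmf_br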
00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:23:59.195 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:59.195 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.091 ms 00:23:59.195 00:23:59.195 --- 10.0.0.3 ping statistics --- 00:23:59.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.195 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:23:59.195 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:23:59.195 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.047 ms 00:23:59.195 00:23:59.195 --- 10.0.0.4 ping statistics --- 00:23:59.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.195 rtt min/avg/max/mdev = 0.047/0.047/0.047/0.000 ms 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:59.195 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:59.195 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:23:59.195 00:23:59.195 --- 10.0.0.1 ping statistics --- 00:23:59.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.195 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:23:59.195 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:59.195 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms 00:23:59.195 00:23:59.195 --- 10.0.0.2 ping statistics --- 00:23:59.195 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:59.195 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # return 0 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=84331 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 84331 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 84331 ']' 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
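With the links bridged and the cross-namespace pings succeeding, nvmfappstart launches the SPDK target inside the namespace (pid 84331 in this run) and waitforlisten blocks until the application's RPC socket accepts commands. A minimal stand-in for those two helpers, assuming the default /var/tmp/spdk.sock RPC socket and using the binary path shown in the trace, might look like:

ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth &
nvmfpid=$!
# crude replacement for waitforlisten: poll until the RPC UNIX socket exists
for _ in $(seq 1 100); do
    [[ -S /var/tmp/spdk.sock ]] && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited before listening"; exit 1; }
    sleep 0.1
done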
00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.195 01:44:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.134 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:00.134 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:00.134 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:00.134 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:00.134 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1ddf4ccf727c9505a5a395ce6d7beef9 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.Ljw 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1ddf4ccf727c9505a5a395ce6d7beef9 0 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1ddf4ccf727c9505a5a395ce6d7beef9 0 00:24:00.393 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1ddf4ccf727c9505a5a395ce6d7beef9 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.Ljw 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.Ljw 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.Ljw 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.394 01:44:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1f4da177cc40f6ef3f35be4e6cc440912f99a9efbaa9a5ecf113d126653b35e4 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.KpZ 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1f4da177cc40f6ef3f35be4e6cc440912f99a9efbaa9a5ecf113d126653b35e4 3 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1f4da177cc40f6ef3f35be4e6cc440912f99a9efbaa9a5ecf113d126653b35e4 3 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1f4da177cc40f6ef3f35be4e6cc440912f99a9efbaa9a5ecf113d126653b35e4 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.KpZ 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.KpZ 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.KpZ 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2f142a03588873da629d62fdab1e0332af3870e4090fe0cd 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.m1X 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2f142a03588873da629d62fdab1e0332af3870e4090fe0cd 0 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2f142a03588873da629d62fdab1e0332af3870e4090fe0cd 0 
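The gen_dhchap_key calls traced here and in the entries that follow all use the same recipe: pick a digest and key length, read that much randomness from /dev/urandom as hex, wrap it into a DHHC-1 secret file under /tmp, and restrict the file's permissions before handing the path back to auth.sh. A simplified restatement is below; the inline "python -" step is left as a comment because its body is not shown in the log, and the function name is made up for illustration:

gen_dhchap_key_sketch() {
    local digest=$1 len=$2                           # e.g. "null" 32, "sha384" 48, "sha512" 64
    local key file
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters of random key material
    file=$(mktemp -t "spdk.key-$digest.XXX")
    # format_dhchap_key: the traced python snippet turns $key into the
    # "DHHC-1:<digest id>:<encoded secret>:" form and writes it to $file
    chmod 0600 "$file"                               # same permissions as in the trace
    echo "$file"                                     # caller stores this path in keys[]/ckeys[]
}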
00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2f142a03588873da629d62fdab1e0332af3870e4090fe0cd 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.m1X 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.m1X 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.m1X 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=91f2ba7c41c220d7e63699a9fe5b501aa494724c282b42de 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.0Ml 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 91f2ba7c41c220d7e63699a9fe5b501aa494724c282b42de 2 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 91f2ba7c41c220d7e63699a9fe5b501aa494724c282b42de 2 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=91f2ba7c41c220d7e63699a9fe5b501aa494724c282b42de 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:00.394 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.0Ml 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.0Ml 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.0Ml 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.654 01:44:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=28f131c068983ff70de343035ed4fe77 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qaf 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 28f131c068983ff70de343035ed4fe77 1 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 28f131c068983ff70de343035ed4fe77 1 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=28f131c068983ff70de343035ed4fe77 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qaf 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qaf 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.qaf 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=3c990b8cf4e2561797f55077da5cb7f4 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Kqn 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 3c990b8cf4e2561797f55077da5cb7f4 1 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 3c990b8cf4e2561797f55077da5cb7f4 1 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=3c990b8cf4e2561797f55077da5cb7f4 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:24:00.654 01:44:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Kqn 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Kqn 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Kqn 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=83fbc7c624165ddcd62c9dbf598cfbe873428c867361ed27 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.i6c 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 83fbc7c624165ddcd62c9dbf598cfbe873428c867361ed27 2 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 83fbc7c624165ddcd62c9dbf598cfbe873428c867361ed27 2 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=83fbc7c624165ddcd62c9dbf598cfbe873428c867361ed27 00:24:00.654 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.i6c 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.i6c 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.i6c 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:24:00.655 01:44:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dafd03504cb3540e023dd73b52302ffa 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.xO6 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dafd03504cb3540e023dd73b52302ffa 0 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dafd03504cb3540e023dd73b52302ffa 0 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dafd03504cb3540e023dd73b52302ffa 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:24:00.655 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.xO6 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.xO6 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.xO6 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=61eebe1baf52f786d2a2bf14cb744a5c0a64e49ececf8855019b73fd62722b36 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.D2t 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 61eebe1baf52f786d2a2bf14cb744a5c0a64e49ececf8855019b73fd62722b36 3 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 61eebe1baf52f786d2a2bf14cb744a5c0a64e49ececf8855019b73fd62722b36 3 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=61eebe1baf52f786d2a2bf14cb744a5c0a64e49ececf8855019b73fd62722b36 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.D2t 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.D2t 00:24:00.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.D2t 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 84331 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 84331 ']' 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:00.912 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.170 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Ljw 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.KpZ ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.KpZ 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.m1X 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.0Ml ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.0Ml 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qaf 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Kqn ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Kqn 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.i6c 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.xO6 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.xO6 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.D2t 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:01.171 01:44:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:01.171 01:44:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:24:01.738 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:01.738 Waiting for block devices as requested 00:24:01.738 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:24:01.738 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:24:02.307 No valid GPT data, bailing 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:24:02.307 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:24:02.566 No valid GPT data, bailing 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@395 -- # return 1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:24:02.566 No valid GPT data, bailing 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:24:02.566 No valid GPT data, bailing 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme1n1 ]] 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:24:02.566 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:24:02.567 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:02.567 01:44:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -a 10.0.0.1 -t tcp -s 4420 00:24:02.567 00:24:02.567 Discovery Log Number of Records 2, Generation counter 2 00:24:02.567 =====Discovery Log Entry 0====== 00:24:02.567 trtype: tcp 00:24:02.567 adrfam: ipv4 00:24:02.567 subtype: current discovery subsystem 00:24:02.567 treq: not specified, sq flow control disable supported 00:24:02.567 portid: 1 00:24:02.567 trsvcid: 4420 00:24:02.567 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:02.567 traddr: 10.0.0.1 00:24:02.567 eflags: none 00:24:02.567 sectype: none 00:24:02.567 =====Discovery Log Entry 1====== 00:24:02.567 trtype: tcp 00:24:02.567 adrfam: ipv4 00:24:02.567 subtype: nvme subsystem 00:24:02.567 treq: not specified, sq flow control disable supported 00:24:02.567 portid: 1 00:24:02.567 trsvcid: 4420 00:24:02.567 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:02.567 traddr: 10.0.0.1 00:24:02.567 eflags: none 00:24:02.567 sectype: none 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:02.567 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 
10.0.0.1 ]] 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.826 nvme0n1 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:02.826 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.086 nvme0n1 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.086 
01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.086 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.087 01:44:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.087 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.346 nvme0n1 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:03.346 01:44:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:03.346 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.347 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.607 nvme0n1 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.607 01:44:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.607 nvme0n1 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.607 01:44:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:03.607 
01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:03.607 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
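The trace above records one full pass of the auth-host loop for a given digest/dhgroup/keyid combination. For readability, here is a minimal sketch of the cycle those xtrace lines correspond to; it assumes the SPDK test harness is already sourced (so rpc_cmd and nvmet_auth_set_key exist as shown in the trace) and reuses the same listener address, port, and NQNs that appear in the log:

  # One connect_authenticate cycle, as recorded by the xtrace output above.
  digest=sha256
  dhgroup=ffdhe2048
  keyid=0

  # 1. Install the DH-HMAC-CHAP key (and controller key, if defined) on the nvmet target.
  nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

  # 2. Restrict the host to the digest/dhgroup pair under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

  # 3. Attach a controller over TCP with the matching key pair.
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" --dhchap-ctrlr-key "ckey${keyid}"

  # 4. Confirm the controller came up, then detach it before the next iteration.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

The remainder of the log repeats this cycle for each keyid and then for the larger FFDHE groups (ffdhe3072, ffdhe4096, ...), which is why the same attach/get/detach sequence recurs with only the key index and dhgroup changing.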
00:24:03.867 nvme0n1 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:03.867 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:04.126 01:44:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.126 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.386 nvme0n1 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.386 01:44:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.386 01:44:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.386 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.646 nvme0n1 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.646 01:44:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.646 nvme0n1 00:24:04.646 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.646 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.646 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.646 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.646 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.906 nvme0n1 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:04.906 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.166 nvme0n1 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.166 01:44:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.734 01:44:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.734 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.994 nvme0n1 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:05.994 01:44:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:05.994 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.262 nvme0n1 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.262 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.263 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.578 nvme0n1 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe4096 3 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.578 01:44:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.847 nvme0n1 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:06.847 01:44:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.847 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.106 nvme0n1 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:07.106 01:44:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.012 01:44:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.012 nvme0n1 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.012 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.271 nvme0n1 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.271 01:44:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.271 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.528 01:44:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.528 01:44:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.786 nvme0n1 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:09.786 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.786 
01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.045 nvme0n1 00:24:10.045 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.045 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.045 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.045 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.045 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.045 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.304 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.563 nvme0n1 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:10.563 01:44:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:10.563 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:10.564 01:44:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.131 nvme0n1 00:24:11.131 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.131 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.131 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.131 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.131 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.131 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.131 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.132 01:44:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.699 nvme0n1 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.699 
01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:11.699 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:11.700 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:11.700 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:11.700 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:11.700 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.267 nvme0n1 00:24:12.267 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.267 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.267 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.268 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:12.268 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.268 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.268 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.268 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.268 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.268 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:12.527 01:44:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.096 nvme0n1 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.096 01:44:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.096 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.097 01:44:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.097 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.665 nvme0n1 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.665 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.666 01:44:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:13.666 nvme0n1 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.666 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.926 nvme0n1 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:13.926 
01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:13.926 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:13.927 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.186 nvme0n1 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.186 
01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.186 nvme0n1 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.186 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.187 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.187 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.187 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.447 nvme0n1 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.447 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.707 nvme0n1 00:24:14.707 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.707 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.707 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.707 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.707 01:44:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.707 
01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.707 01:44:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.707 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.967 nvme0n1 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:14.967 01:44:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.967 nvme0n1 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.967 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.227 01:44:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.227 nvme0n1 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:15.227 
01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.227 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.228 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
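
The trace above and below repeats one pattern per (dhgroup, keyid) pair: host/auth.sh programs the kernel target with a DH-HMAC-CHAP secret (the echoes of 'hmac(sha384)', the FFDHE group, and the DHHC-1 key/ctrlr-key at auth.sh@48-51), restricts the SPDK host to that digest and DH group, attaches a controller with the matching --dhchap-key, checks that nvme0 appears, and detaches it. A minimal sketch of that loop, reconstructed only from the commands visible in this log, is shown below; the rpc_cmd and nvmet_auth_set_key helpers, the keys[]/ckeys[] arrays, and get_main_ns_ip are assumed to be provided by the surrounding test framework rather than defined here.

# Sketch reconstructed from the trace; helper functions and key arrays are assumptions.
digest=sha384
dhgroups=(ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
        # Program the target side with the per-host secret (and controller secret, if any).
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"

        # Restrict the host to the digest/DH group under test.
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

        # Connect with DH-HMAC-CHAP; add the controller key only when bidirectional auth is configured.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"

        # Verify the authenticated connection came up, then tear it down for the next iteration.
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    done
done
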
00:24:15.487 nvme0n1 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:15.487 01:44:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.487 01:44:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.747 nvme0n1 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:15.747 01:44:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:15.747 01:44:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:15.747 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.006 nvme0n1 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.006 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.007 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.266 nvme0n1 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.266 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.526 nvme0n1 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.526 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.527 01:44:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.786 nvme0n1 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:16.786 01:44:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:16.786 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:16.787 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:16.787 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:16.787 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.046 nvme0n1 00:24:17.046 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.046 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.046 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.046 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.046 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.046 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.305 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.306 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:17.306 01:44:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.306 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.565 nvme0n1 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.565 01:44:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.824 nvme0n1 00:24:17.824 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:17.824 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:17.824 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:17.824 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:17.824 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:17.825 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe6144 3 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.084 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.344 nvme0n1 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.344 01:44:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.344 01:44:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.603 nvme0n1 00:24:18.603 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.603 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:18.603 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:18.603 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.603 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.603 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.862 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:18.862 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:18.862 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.862 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.862 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
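
The nvmet_auth_set_key steps traced above push the DH-HMAC-CHAP parameters for one key index into the kernel nvmet target. A minimal sketch of what the echoes at host/auth.sh@48-51 amount to follows; set -x does not print redirections, so the configfs destination (nvmet_host_dir) and the attribute names are assumptions rather than values taken from this log, and keys/ckeys are the key arrays defined earlier in host/auth.sh (outside this excerpt).

# Sketch only -- not the verbatim host/auth.sh function.
nvmet_auth_set_key() {
  local digest=$1 dhgroup=$2 keyid=$3
  local key=${keys[keyid]} ckey=${ckeys[keyid]}
  # nvmet_host_dir: hypothetical path to the nvmet configfs host entry (target side);
  # the real redirection targets are not visible in the xtrace output above.
  echo "hmac(${digest})" > "${nvmet_host_dir}/dhchap_hash"       # e.g. 'hmac(sha384)'
  echo "${dhgroup}"      > "${nvmet_host_dir}/dhchap_dhgroup"    # e.g. ffdhe6144
  echo "${key}"          > "${nvmet_host_dir}/dhchap_key"        # DHHC-1:xx:... host key
  # Bidirectional auth only when a controller key exists for this keyid.
  [[ -z ${ckey} ]] || echo "${ckey}" > "${nvmet_host_dir}/dhchap_ctrl_key"
}
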
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.862 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:18.862 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:18.863 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.431 nvme0n1 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.431 01:44:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.999 nvme0n1 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:19.999 01:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:19.999 01:44:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:19.999 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.000 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.000 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.000 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:20.000 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.000 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.568 nvme0n1 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:20.568 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:20.569 01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:20.569 
01:44:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.137 nvme0n1 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
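
The connect_authenticate sequence that repeats throughout this trace (host/auth.sh@55-65) reduces to a handful of SPDK RPC calls. A condensed sketch using only the commands visible above; rpc_cmd is the test suite's RPC wrapper, and key${keyid}/ckey${keyid} are key names registered earlier in the test (outside this excerpt):

connect_authenticate() {
  local digest=$1 dhgroup=$2 keyid=$3
  # Controller (bidirectional) key is optional; only passed when ckeys[keyid] is set.
  local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  # Restrict the host to the digest/dhgroup combination under test.
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  # Connect to the authenticated subsystem with the matching key(s).
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a "$(get_main_ns_ip)" -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
  # Authentication succeeded if the controller appears; then tear it down.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0
}
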
common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.137 01:44:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.705 nvme0n1 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:21.705 01:44:30 
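
The digest, DH group, and key index seen in each round advance in a fixed order; the trace entries at host/auth.sh@100-104 correspond to three nested loops, so every digest is exercised against every DH group and every configured key. Roughly (digests/dhgroups/keys are arrays defined earlier in the script, outside this excerpt):

for digest in "${digests[@]}"; do            # sha384 and sha512 appear in this excerpt
  for dhgroup in "${dhgroups[@]}"; do        # ffdhe2048 through ffdhe8192
    for keyid in "${!keys[@]}"; do           # key indices 0-4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
      connect_authenticate "$digest" "$dhgroup" "$keyid"  # host side
    done
  done
done
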
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.705 01:44:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.705 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.964 nvme0n1 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:21.964 01:44:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:21.964 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:21.965 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.224 nvme0n1 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.224 nvme0n1 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.224 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.484 nvme0n1 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:22.484 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.485 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.744 nvme0n1 00:24:22.744 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.744 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.744 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.744 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.744 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.744 01:44:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
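
get_main_ns_ip, traced repeatedly via nvmf/common.sh@769-783, simply maps the transport under test to the environment variable that holds the target address and prints its value (10.0.0.1 in this run). A sketch of the path exercised here; the name of the variable holding the transport is an assumption, since the trace only shows its expanded value (tcp):

get_main_ns_ip() {
  local ip
  local -A ip_candidates=(
    [rdma]=NVMF_FIRST_TARGET_IP
    [tcp]=NVMF_INITIATOR_IP
  )
  ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> NVMF_INITIATOR_IP
  echo "${!ip}"                          # indirect expansion -> 10.0.0.1 here
}
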
ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:22.744 nvme0n1 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:22.744 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 nvme0n1 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:23.004 
01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.004 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.263 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.264 nvme0n1 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.264 
01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.264 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.523 nvme0n1 00:24:23.523 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.523 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.523 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.523 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.523 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.523 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.523 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.523 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.524 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.783 nvme0n1 00:24:23.783 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.783 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:23.783 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:23.783 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.783 01:44:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:23.783 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:23.784 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.043 nvme0n1 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.043 
01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.043 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.044 01:44:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.044 nvme0n1 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.044 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.303 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.303 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.303 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.303 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.303 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:24.304 01:44:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.304 nvme0n1 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.304 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.563 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.564 01:44:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.564 nvme0n1 00:24:24.564 01:44:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.564 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.564 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.564 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.564 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.564 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:24.823 
01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:24.823 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
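[editor's note] The repeated blocks in this part of the trace all follow the same pattern from host/auth.sh: for each DH group and key index the target side is reprogrammed (nvmet_auth_set_key) and the host then re-attaches with the matching DH-HMAC-CHAP material (connect_authenticate). The following is a condensed sketch reconstructed only from the commands visible in this xtrace, not the verbatim script; the redirection targets of the echo calls and the setup of the keys[]/ckeys[] arrays are not shown in the trace and are assumed here.

  # Sketch reconstructed from the xtrace above -- not the verbatim host/auth.sh.
  # Assumed: keys[]/ckeys[] hold the DHHC-1 secrets seen in the log; the echo
  # targets (nvmet configfs attributes) are not visible in this trace.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      echo "hmac($digest)"              # host/auth.sh@48: target-side digest
      echo "$dhgroup"                   # host/auth.sh@49: target-side DH group
      echo "$key"                       # host/auth.sh@50: host key on the target
      [[ -z $ckey ]] || echo "$ckey"    # host/auth.sh@51: ctrl key when bidirectional
  }

  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      # Only pass a controller key when a ckey exists for this keyid (host/auth.sh@58).
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}" "${ckey[@]}"
      # host/auth.sh@64: the controller must come up under the expected name.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
      rpc_cmd bdev_nvme_detach_controller nvme0    # host/auth.sh@65: reset for next round
  }

  # host/auth.sh@101-104: iterate every DH group and key index with the sha512 digest.
  for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"
          connect_authenticate sha512 "$dhgroup" "$keyid"
      done
  done

In the log each successful attach surfaces as a fresh nvme0n1 namespace, the jq check against bdev_nvme_get_controllers confirms the controller name, and the detach clears state before the next digest/dhgroup/keyid combination is tried.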
00:24:24.824 nvme0n1 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:24.824 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:25.083 01:44:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.083 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.342 nvme0n1 00:24:25.342 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.342 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.342 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.342 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.342 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.342 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.343 01:44:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.343 01:44:33 
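The repeated ip_candidates trace is the get_main_ns_ip helper picking which environment variable carries the initiator-facing address for the transport under test; for tcp it resolves NVMF_INITIATOR_IP, which is 10.0.0.1 throughout this run. A minimal sketch of that selection, reconstructed from the trace (TEST_TRANSPORT is an assumed variable name; the variable names inside the array are taken verbatim from the output):

    get_main_ns_ip() {
            local ip transport=${TEST_TRANSPORT:-tcp}
            local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
            ip=${ip_candidates[$transport]}        # tcp -> NVMF_INITIATOR_IP
            echo "${!ip}"                          # indirect expansion of NVMF_INITIATOR_IP -> 10.0.0.1 here
    }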
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.343 01:44:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.602 nvme0n1 00:24:25.602 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.602 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:25.602 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:25.602 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.602 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.602 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:25.861 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:25.862 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.121 nvme0n1 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.121 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.380 nvme0n1 00:24:26.380 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.380 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.380 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.380 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.380 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.380 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:26.639 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.640 01:44:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.899 nvme0n1 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MWRkZjRjY2Y3MjdjOTUwNWE1YTM5NWNlNmQ3YmVlZjk8wMP4: 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: ]] 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MWY0ZGExNzdjYzQwZjZlZjNmMzViZTRlNmNjNDQwOTEyZjk5YTllZmJhYTlhNWVjZjExM2QxMjY2NTNiMzVlNIDtARg=: 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:26.899 01:44:35 
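At this point the sha512/ffdhe6144 pass is complete and the same five keyids are replayed with ffdhe8192; the host/auth.sh@101 and @102 markers show this as two nested loops that re-run connect_authenticate for every dhgroup/keyid combination, with the digest fixed at sha512 in this portion of the run. In outline (loop bodies elided, names as they appear in the trace):

    for dhgroup in "${dhgroups[@]}"; do            # ffdhe6144 above, ffdhe8192 from here on
            for keyid in "${!keys[@]}"; do         # keyids 0 through 4
                    nvmet_auth_set_key sha512 "$dhgroup" "$keyid"      # re-key the target
                    connect_authenticate sha512 "$dhgroup" "$keyid"    # attach, verify, detach
            done
    done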
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:26.899 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.467 nvme0n1 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:27.467 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:27.468 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:27.468 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:27.468 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:27.468 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:27.468 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:27.468 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:27.468 01:44:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:27.468 01:44:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.035 nvme0n1 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:28.035 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.036 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.604 nvme0n1 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ODNmYmM3YzYyNDE2NWRkY2Q2MmM5ZGJmNTk4Y2ZiZTg3MzQyOGM4NjczNjFlZDI3nBYTJg==: 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZGFmZDAzNTA0Y2IzNTQwZTAyM2RkNzNiNTIzMDJmZmExmAjo: 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:28.604 01:44:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.172 nvme0n1 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:NjFlZWJlMWJhZjUyZjc4NmQyYTJiZjE0Y2I3NDRhNWMwYTY0ZTQ5ZWNlY2Y4ODU1MDE5YjczZmQ2MjcyMmIzNp7XNZU=: 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:29.172 01:44:37 
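keyid=4 differs from the others: its ckey is empty, so the [[ -z '' ]] branch skips the controller secret and the attach is issued with --dhchap-key key4 only, i.e. unidirectional authentication (the host proves itself to the target but does not challenge the controller back). The ckey expansion visible at host/auth.sh@58 handles both cases in one line, sketched here with the same attach command used throughout the trace:

    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})    # empty array when no controller secret is configured
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"              # keyid=4 therefore omits --dhchap-ctrlr-key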
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.172 01:44:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.747 nvme0n1 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:29.747 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # 
local es=0 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.748 request: 00:24:29.748 { 00:24:29.748 "name": "nvme0", 00:24:29.748 "trtype": "tcp", 00:24:29.748 "traddr": "10.0.0.1", 00:24:29.748 "adrfam": "ipv4", 00:24:29.748 "trsvcid": "4420", 00:24:29.748 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:29.748 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:29.748 "prchk_reftag": false, 00:24:29.748 "prchk_guard": false, 00:24:29.748 "hdgst": false, 00:24:29.748 "ddgst": false, 00:24:29.748 "allow_unrecognized_csi": false, 00:24:29.748 "method": "bdev_nvme_attach_controller", 00:24:29.748 "req_id": 1 00:24:29.748 } 00:24:29.748 Got JSON-RPC error response 00:24:29.748 response: 00:24:29.748 { 00:24:29.748 "code": -5, 00:24:29.748 "message": "Input/output error" 00:24:29.748 } 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:29.748 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.016 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.017 request: 00:24:30.017 { 00:24:30.017 "name": "nvme0", 00:24:30.017 "trtype": "tcp", 00:24:30.017 "traddr": "10.0.0.1", 00:24:30.017 "adrfam": "ipv4", 00:24:30.017 "trsvcid": "4420", 00:24:30.017 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:30.017 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:30.017 "prchk_reftag": false, 00:24:30.017 "prchk_guard": false, 00:24:30.017 "hdgst": false, 00:24:30.017 "ddgst": false, 00:24:30.017 "dhchap_key": "key2", 00:24:30.017 "allow_unrecognized_csi": false, 00:24:30.017 "method": "bdev_nvme_attach_controller", 00:24:30.017 "req_id": 1 00:24:30.017 } 00:24:30.017 Got JSON-RPC error response 00:24:30.017 response: 00:24:30.017 { 00:24:30.017 "code": -5, 00:24:30.017 "message": "Input/output error" 00:24:30.017 } 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:30.017 01:44:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.017 request: 00:24:30.017 { 00:24:30.017 "name": "nvme0", 00:24:30.017 "trtype": "tcp", 00:24:30.017 "traddr": "10.0.0.1", 00:24:30.017 "adrfam": "ipv4", 00:24:30.017 "trsvcid": "4420", 
00:24:30.017 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:30.017 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:30.017 "prchk_reftag": false, 00:24:30.017 "prchk_guard": false, 00:24:30.017 "hdgst": false, 00:24:30.017 "ddgst": false, 00:24:30.017 "dhchap_key": "key1", 00:24:30.017 "dhchap_ctrlr_key": "ckey2", 00:24:30.017 "allow_unrecognized_csi": false, 00:24:30.017 "method": "bdev_nvme_attach_controller", 00:24:30.017 "req_id": 1 00:24:30.017 } 00:24:30.017 Got JSON-RPC error response 00:24:30.017 response: 00:24:30.017 { 00:24:30.017 "code": -5, 00:24:30.017 "message": "Input/output error" 00:24:30.017 } 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.017 nvme0n1 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:30.017 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:30.276 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.277 request: 00:24:30.277 { 00:24:30.277 "name": "nvme0", 00:24:30.277 "dhchap_key": "key1", 00:24:30.277 "dhchap_ctrlr_key": "ckey2", 00:24:30.277 "method": "bdev_nvme_set_keys", 00:24:30.277 "req_id": 1 00:24:30.277 } 00:24:30.277 Got JSON-RPC error response 00:24:30.277 response: 00:24:30.277 
{ 00:24:30.277 "code": -13, 00:24:30.277 "message": "Permission denied" 00:24:30.277 } 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:24:30.277 01:44:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:24:31.226 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:24:31.226 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MmYxNDJhMDM1ODg4NzNkYTYyOWQ2MmZkYWIxZTAzMzJhZjM4NzBlNDA5MGZlMGNkdR+A7Q==: 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: ]] 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:OTFmMmJhN2M0MWMyMjBkN2U2MzY5OWE5ZmU1YjUwMWFhNDk0NzI0YzI4MmI0MmRlEH8aKg==: 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@142 -- # get_main_ns_ip 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.227 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.486 nvme0n1 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MjhmMTMxYzA2ODk4M2ZmNzBkZTM0MzAzNWVkNGZlNzea2EfH: 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: ]] 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:M2M5OTBiOGNmNGUyNTYxNzk3ZjU1MDc3ZGE1Y2I3ZjSB+R7C: 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.486 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.486 request: 00:24:31.486 { 00:24:31.486 "name": "nvme0", 00:24:31.486 "dhchap_key": "key2", 00:24:31.486 "dhchap_ctrlr_key": "ckey1", 00:24:31.486 "method": "bdev_nvme_set_keys", 00:24:31.486 "req_id": 1 00:24:31.486 } 00:24:31.486 Got JSON-RPC error response 00:24:31.486 response: 00:24:31.486 { 00:24:31.486 "code": -13, 00:24:31.486 "message": "Permission denied" 00:24:31.487 } 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:24:31.487 01:44:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:24:32.422 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:24:32.422 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:24:32.422 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:32.422 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.681 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # 
nvmfcleanup 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.682 rmmod nvme_tcp 00:24:32.682 rmmod nvme_fabrics 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 84331 ']' 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 84331 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 84331 ']' 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 84331 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:32.682 01:44:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84331 00:24:32.682 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:32.682 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:32.682 killing process with pid 84331 00:24:32.682 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84331' 00:24:32.682 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 84331 00:24:32.682 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 84331 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:33.619 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:24:33.620 01:44:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@246 -- # remove_spdk_ns 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@300 -- # return 0 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:33.620 01:44:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:24:33.620 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:33.620 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:33.620 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:33.620 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:33.620 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:24:33.620 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:24:33.620 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:34.557 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:34.557 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 
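The cleanup above tears down the kernel nvmet soft target through configfs before unloading the modules. As a rough standalone sketch of that same teardown order (paths are taken from the log; the redirection target of the `echo 0` step is assumed to be the namespace enable attribute, and error handling is omitted):

  subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  # revoke the host's access to the subsystem, then drop the host entry itself
  rm -f "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"
  rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
  # disable the namespace before removing it (assumed target of the bare 'echo 0' in the trace)
  echo 0 > "$subsys/namespaces/1/enable"
  # unlink the subsystem from the port, then remove namespace, port and subsystem
  rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0
  rmdir "$subsys/namespaces/1"
  rmdir /sys/kernel/config/nvmet/ports/1
  rmdir "$subsys"
  # finally unload the kernel target modules, matching the 'modprobe -r nvmet_tcp nvmet' above
  modprobe -r nvmet_tcp nvmet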
00:24:34.557 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:24:34.557 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.Ljw /tmp/spdk.key-null.m1X /tmp/spdk.key-sha256.qaf /tmp/spdk.key-sha384.i6c /tmp/spdk.key-sha512.D2t /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:24:34.557 01:44:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:24:34.816 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:24:35.076 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:35.076 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:24:35.076 00:24:35.076 real 0m36.511s 00:24:35.076 user 0m33.532s 00:24:35.076 sys 0m4.015s 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.076 ************************************ 00:24:35.076 END TEST nvmf_auth_host 00:24:35.076 ************************************ 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:35.076 ************************************ 00:24:35.076 START TEST nvmf_digest 00:24:35.076 ************************************ 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:35.076 * Looking for test storage... 
00:24:35.076 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:24:35.076 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.336 --rc genhtml_branch_coverage=1 00:24:35.336 --rc genhtml_function_coverage=1 00:24:35.336 --rc genhtml_legend=1 00:24:35.336 --rc geninfo_all_blocks=1 00:24:35.336 --rc geninfo_unexecuted_blocks=1 00:24:35.336 00:24:35.336 ' 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.336 --rc genhtml_branch_coverage=1 00:24:35.336 --rc genhtml_function_coverage=1 00:24:35.336 --rc genhtml_legend=1 00:24:35.336 --rc geninfo_all_blocks=1 00:24:35.336 --rc geninfo_unexecuted_blocks=1 00:24:35.336 00:24:35.336 ' 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.336 --rc genhtml_branch_coverage=1 00:24:35.336 --rc genhtml_function_coverage=1 00:24:35.336 --rc genhtml_legend=1 00:24:35.336 --rc geninfo_all_blocks=1 00:24:35.336 --rc geninfo_unexecuted_blocks=1 00:24:35.336 00:24:35.336 ' 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:35.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:35.336 --rc genhtml_branch_coverage=1 00:24:35.336 --rc genhtml_function_coverage=1 00:24:35.336 --rc genhtml_legend=1 00:24:35.336 --rc geninfo_all_blocks=1 00:24:35.336 --rc geninfo_unexecuted_blocks=1 00:24:35.336 00:24:35.336 ' 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:35.336 01:44:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:35.336 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:35.337 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@460 -- # nvmf_veth_init 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:24:35.337 Cannot find device "nvmf_init_br" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:24:35.337 Cannot find device "nvmf_init_br2" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:24:35.337 Cannot find device "nvmf_tgt_br" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@164 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # ip link 
set nvmf_tgt_br2 nomaster 00:24:35.337 Cannot find device "nvmf_tgt_br2" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@165 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:24:35.337 Cannot find device "nvmf_init_br" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@166 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:24:35.337 Cannot find device "nvmf_init_br2" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@167 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:24:35.337 Cannot find device "nvmf_tgt_br" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@168 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:24:35.337 Cannot find device "nvmf_tgt_br2" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:24:35.337 Cannot find device "nvmf_br" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@170 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:24:35.337 Cannot find device "nvmf_init_if" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:24:35.337 Cannot find device "nvmf_init_if2" 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:35.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@173 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:35.337 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@174 -- # true 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:35.337 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:35.597 01:44:43 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:24:35.597 01:44:43 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:24:35.597 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 
00:24:35.597 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.067 ms 00:24:35.597 00:24:35.597 --- 10.0.0.3 ping statistics --- 00:24:35.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.597 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:24:35.597 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:24:35.597 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.043 ms 00:24:35.597 00:24:35.597 --- 10.0.0.4 ping statistics --- 00:24:35.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.597 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:35.597 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:35.597 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:24:35.597 00:24:35.597 --- 10.0.0.1 ping statistics --- 00:24:35.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.597 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:24:35.597 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.597 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:24:35.597 00:24:35.597 --- 10.0.0.2 ping statistics --- 00:24:35.597 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.597 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@461 -- # return 0 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:35.597 ************************************ 00:24:35.597 START TEST nvmf_digest_clean 00:24:35.597 ************************************ 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 
00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:35.597 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=85966 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 85966 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 85966 ']' 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:35.857 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:35.857 01:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:35.857 [2024-11-17 01:44:44.185831] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:35.857 [2024-11-17 01:44:44.185999] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:36.116 [2024-11-17 01:44:44.375420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:36.116 [2024-11-17 01:44:44.499309] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:36.116 [2024-11-17 01:44:44.499383] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:36.116 [2024-11-17 01:44:44.499411] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:36.116 [2024-11-17 01:44:44.499441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:36.116 [2024-11-17 01:44:44.499460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
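The target application here is launched inside the test namespace with --wait-for-rpc, so it pauses before framework initialization until the harness configures it over /var/tmp/spdk.sock. A condensed sketch of that startup step, using only the paths and flags shown above (waitforlisten is the harness helper that polls for the RPC socket; it is summarized as a comment, not reimplemented):

  # Start the NVMe-oF target in the test namespace; -e 0xFFFF is the tracepoint group mask
  # reported above, and --wait-for-rpc defers init until RPCs arrive on /var/tmp/spdk.sock.
  ip netns exec nvmf_tgt_ns_spdk \
      /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
  nvmfpid=$!
  # The harness then waits (waitforlisten) for /var/tmp/spdk.sock before sending configuration RPCs.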
00:24:36.116 [2024-11-17 01:44:44.500907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.684 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:36.684 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:36.684 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:36.684 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:36.684 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.943 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:36.943 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:24:36.943 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:24:36.943 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:24:36.943 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:36.943 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:36.943 [2024-11-17 01:44:45.326434] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:37.202 null0 00:24:37.202 [2024-11-17 01:44:45.424047] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:37.202 [2024-11-17 01:44:45.448173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=85998 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 85998 /var/tmp/bperf.sock 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 85998 ']' 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:37.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:37.202 01:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:37.202 [2024-11-17 01:44:45.543752] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:37.202 [2024-11-17 01:44:45.544066] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85998 ] 00:24:37.462 [2024-11-17 01:44:45.724628] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:37.462 [2024-11-17 01:44:45.848536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:38.397 01:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:38.397 01:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:38.397 01:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:38.397 01:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:38.397 01:44:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:38.656 [2024-11-17 01:44:46.918186] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:38.656 01:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:38.656 01:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:38.914 nvme0n1 00:24:38.914 01:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:38.914 01:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:39.173 Running I/O for 2 seconds... 
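Each run_bperf pass in this test drives the same RPC sequence against the bdevperf instance: start framework init, attach an NVMe-oF controller over TCP with data digest enabled (--ddgst), then run the workload through bdevperf.py. A minimal sketch of that sequence, reassembled from the rpc.py/bdevperf.py invocations visible above (socket path, target address and subsystem NQN are the ones used by this job):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # bdevperf was started with --wait-for-rpc, so kick off framework initialization first.
  $RPC framework_start_init
  # Attach the target subsystem with TCP data digest enabled; the namespace shows up as bdev nvme0n1.
  $RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Run the configured workload (randread, 4 KiB, qd=128, 2 seconds for this first pass).
  /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests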
00:24:41.044 14605.00 IOPS, 57.05 MiB/s [2024-11-17T01:44:49.503Z] 14732.00 IOPS, 57.55 MiB/s 00:24:41.044 Latency(us) 00:24:41.044 [2024-11-17T01:44:49.503Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.044 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:41.044 nvme0n1 : 2.01 14757.08 57.64 0.00 0.00 8668.39 8162.21 21924.77 00:24:41.044 [2024-11-17T01:44:49.503Z] =================================================================================================================== 00:24:41.044 [2024-11-17T01:44:49.503Z] Total : 14757.08 57.64 0.00 0.00 8668.39 8162.21 21924.77 00:24:41.044 { 00:24:41.044 "results": [ 00:24:41.044 { 00:24:41.044 "job": "nvme0n1", 00:24:41.044 "core_mask": "0x2", 00:24:41.044 "workload": "randread", 00:24:41.044 "status": "finished", 00:24:41.044 "queue_depth": 128, 00:24:41.044 "io_size": 4096, 00:24:41.044 "runtime": 2.005275, 00:24:41.044 "iops": 14757.078206231066, 00:24:41.044 "mibps": 57.6448367430901, 00:24:41.044 "io_failed": 0, 00:24:41.044 "io_timeout": 0, 00:24:41.044 "avg_latency_us": 8668.385869399592, 00:24:41.044 "min_latency_us": 8162.210909090909, 00:24:41.044 "max_latency_us": 21924.77090909091 00:24:41.044 } 00:24:41.044 ], 00:24:41.044 "core_count": 1 00:24:41.044 } 00:24:41.044 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:41.044 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:41.044 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:41.044 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:41.044 | select(.opcode=="crc32c") 00:24:41.044 | "\(.module_name) \(.executed)"' 00:24:41.044 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 85998 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 85998 ']' 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 85998 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85998 00:24:41.304 killing process with pid 85998 00:24:41.304 Received shutdown signal, test time was about 2.000000 seconds 00:24:41.304 00:24:41.304 Latency(us) 00:24:41.304 [2024-11-17T01:44:49.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:41.304 [2024-11-17T01:44:49.763Z] =================================================================================================================== 00:24:41.304 [2024-11-17T01:44:49.763Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85998' 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 85998 00:24:41.304 01:44:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 85998 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86066 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86066 /var/tmp/bperf.sock 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86066 ']' 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:42.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:42.241 01:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:42.241 [2024-11-17 01:44:50.528228] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:42.241 [2024-11-17 01:44:50.528618] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86066 ] 00:24:42.241 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:42.241 Zero copy mechanism will not be used. 00:24:42.241 [2024-11-17 01:44:50.694725] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:42.500 [2024-11-17 01:44:50.784454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.068 01:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.068 01:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:43.068 01:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:43.068 01:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:43.068 01:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:43.637 [2024-11-17 01:44:51.799031] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:43.637 01:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.637 01:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:43.896 nvme0n1 00:24:43.896 01:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:43.896 01:44:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:44.155 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:44.155 Zero copy mechanism will not be used. 00:24:44.155 Running I/O for 2 seconds... 
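The IOPS and MiB/s columns in the result tables are related by the configured I/O size: mibps = iops * io_size / 2^20. A quick check against the first randread pass above (14757.08 IOPS at 4096-byte I/O), as a one-liner:

  awk 'BEGIN { printf "%.2f MiB/s\n", 14757.08 * 4096 / (1024 * 1024) }'   # prints 57.64 MiB/s, matching the table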
00:24:46.028 7408.00 IOPS, 926.00 MiB/s [2024-11-17T01:44:54.487Z] 7400.00 IOPS, 925.00 MiB/s 00:24:46.028 Latency(us) 00:24:46.028 [2024-11-17T01:44:54.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:46.028 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:46.028 nvme0n1 : 2.00 7398.55 924.82 0.00 0.00 2159.46 1951.19 3842.79 00:24:46.028 [2024-11-17T01:44:54.487Z] =================================================================================================================== 00:24:46.028 [2024-11-17T01:44:54.487Z] Total : 7398.55 924.82 0.00 0.00 2159.46 1951.19 3842.79 00:24:46.028 { 00:24:46.028 "results": [ 00:24:46.028 { 00:24:46.028 "job": "nvme0n1", 00:24:46.028 "core_mask": "0x2", 00:24:46.028 "workload": "randread", 00:24:46.028 "status": "finished", 00:24:46.028 "queue_depth": 16, 00:24:46.028 "io_size": 131072, 00:24:46.028 "runtime": 2.002555, 00:24:46.028 "iops": 7398.548354477155, 00:24:46.028 "mibps": 924.8185443096444, 00:24:46.028 "io_failed": 0, 00:24:46.028 "io_timeout": 0, 00:24:46.028 "avg_latency_us": 2159.4632947182404, 00:24:46.028 "min_latency_us": 1951.1854545454546, 00:24:46.028 "max_latency_us": 3842.7927272727275 00:24:46.028 } 00:24:46.028 ], 00:24:46.028 "core_count": 1 00:24:46.028 } 00:24:46.028 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:46.028 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:46.028 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:46.028 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:46.028 | select(.opcode=="crc32c") 00:24:46.028 | "\(.module_name) \(.executed)"' 00:24:46.028 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86066 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86066 ']' 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86066 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86066 00:24:46.287 killing process with pid 86066 00:24:46.287 Received shutdown signal, test time was about 2.000000 seconds 00:24:46.287 00:24:46.287 Latency(us) 00:24:46.287 [2024-11-17T01:44:54.746Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:46.287 [2024-11-17T01:44:54.746Z] =================================================================================================================== 00:24:46.287 [2024-11-17T01:44:54.746Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86066' 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86066 00:24:46.287 01:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86066 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86134 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86134 /var/tmp/bperf.sock 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86134 ']' 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:47.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:47.224 01:44:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:47.224 [2024-11-17 01:44:55.625234] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:47.225 [2024-11-17 01:44:55.625653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86134 ] 00:24:47.484 [2024-11-17 01:44:55.800978] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.484 [2024-11-17 01:44:55.888527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:48.052 01:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:48.052 01:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:48.052 01:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:48.052 01:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:48.052 01:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:48.619 [2024-11-17 01:44:56.897692] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:48.619 01:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.619 01:44:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:48.878 nvme0n1 00:24:48.878 01:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:48.878 01:44:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:49.137 Running I/O for 2 seconds... 
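After every bperf pass the script reads the accel framework statistics from the bdevperf instance and checks which module executed the crc32c operations; with scan_dsa=false the expected module is "software". A sketch of that check using the exact jq filter that appears in the log (the process-substitution form is an illustration of what read -r plus get_accel_stats do in the harness):

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  read -r acc_module acc_executed < <($RPC accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
  # The pass succeeds if at least one crc32c operation ran and it was handled by the expected module.
  (( acc_executed > 0 )) && [[ $acc_module == software ]]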
00:24:51.011 15749.00 IOPS, 61.52 MiB/s [2024-11-17T01:44:59.470Z] 15875.50 IOPS, 62.01 MiB/s 00:24:51.011 Latency(us) 00:24:51.011 [2024-11-17T01:44:59.470Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:51.011 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:51.011 nvme0n1 : 2.00 15914.60 62.17 0.00 0.00 8036.92 2412.92 17873.45 00:24:51.011 [2024-11-17T01:44:59.470Z] =================================================================================================================== 00:24:51.011 [2024-11-17T01:44:59.470Z] Total : 15914.60 62.17 0.00 0.00 8036.92 2412.92 17873.45 00:24:51.011 { 00:24:51.011 "results": [ 00:24:51.011 { 00:24:51.011 "job": "nvme0n1", 00:24:51.011 "core_mask": "0x2", 00:24:51.011 "workload": "randwrite", 00:24:51.011 "status": "finished", 00:24:51.011 "queue_depth": 128, 00:24:51.011 "io_size": 4096, 00:24:51.011 "runtime": 2.003129, 00:24:51.011 "iops": 15914.601605787746, 00:24:51.011 "mibps": 62.16641252260838, 00:24:51.011 "io_failed": 0, 00:24:51.011 "io_timeout": 0, 00:24:51.011 "avg_latency_us": 8036.917828265401, 00:24:51.011 "min_latency_us": 2412.9163636363637, 00:24:51.011 "max_latency_us": 17873.454545454544 00:24:51.011 } 00:24:51.011 ], 00:24:51.011 "core_count": 1 00:24:51.011 } 00:24:51.011 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:51.011 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:51.011 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:51.011 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:51.011 | select(.opcode=="crc32c") 00:24:51.011 | "\(.module_name) \(.executed)"' 00:24:51.011 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86134 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86134 ']' 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86134 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86134 00:24:51.269 killing process with pid 86134 00:24:51.269 Received shutdown signal, test time was about 2.000000 seconds 00:24:51.269 00:24:51.269 Latency(us) 00:24:51.269 [2024-11-17T01:44:59.728Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:51.269 [2024-11-17T01:44:59.728Z] =================================================================================================================== 00:24:51.269 [2024-11-17T01:44:59.728Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86134' 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86134 00:24:51.269 01:44:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86134 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=86201 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 86201 /var/tmp/bperf.sock 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 86201 ']' 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:52.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:52.206 01:45:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:52.206 [2024-11-17 01:45:00.564681] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:24:52.206 [2024-11-17 01:45:00.565179] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86201 ] 00:24:52.206 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:52.206 Zero copy mechanism will not be used. 00:24:52.466 [2024-11-17 01:45:00.747103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:52.466 [2024-11-17 01:45:00.835104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:53.404 01:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:53.404 01:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:24:53.404 01:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:53.404 01:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:53.404 01:45:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:53.662 [2024-11-17 01:45:01.892210] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:53.662 01:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.662 01:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.921 nvme0n1 00:24:54.181 01:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:54.181 01:45:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:54.181 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:54.181 Zero copy mechanism will not be used. 00:24:54.181 Running I/O for 2 seconds... 
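nvmf_digest_clean repeats the same bperf flow four times, varying only the workload parameters; the four invocations that appear in this log are listed below. The trailing "false" is the scan_dsa flag seen above (scan_dsa=false), which is why the crc32c accounting is expected to land in the software module, and the 131072-byte runs trigger the "greater than zero copy threshold (65536)" notice:

  run_bperf randread  4096   128 false   # 4 KiB reads,   qd=128
  run_bperf randread  131072  16 false   # 128 KiB reads,  qd=16, zero-copy disabled
  run_bperf randwrite 4096   128 false   # 4 KiB writes,  qd=128
  run_bperf randwrite 131072  16 false   # 128 KiB writes, qd=16, zero-copy disabled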
00:24:56.070 5366.00 IOPS, 670.75 MiB/s [2024-11-17T01:45:04.529Z] 5504.00 IOPS, 688.00 MiB/s 00:24:56.070 Latency(us) 00:24:56.070 [2024-11-17T01:45:04.529Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:56.070 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:56.070 nvme0n1 : 2.00 5502.73 687.84 0.00 0.00 2900.37 2010.76 5659.93 00:24:56.070 [2024-11-17T01:45:04.529Z] =================================================================================================================== 00:24:56.070 [2024-11-17T01:45:04.529Z] Total : 5502.73 687.84 0.00 0.00 2900.37 2010.76 5659.93 00:24:56.070 { 00:24:56.070 "results": [ 00:24:56.070 { 00:24:56.070 "job": "nvme0n1", 00:24:56.070 "core_mask": "0x2", 00:24:56.070 "workload": "randwrite", 00:24:56.070 "status": "finished", 00:24:56.070 "queue_depth": 16, 00:24:56.070 "io_size": 131072, 00:24:56.070 "runtime": 2.004095, 00:24:56.070 "iops": 5502.733153867456, 00:24:56.070 "mibps": 687.841644233432, 00:24:56.070 "io_failed": 0, 00:24:56.070 "io_timeout": 0, 00:24:56.070 "avg_latency_us": 2900.3696719095196, 00:24:56.070 "min_latency_us": 2010.7636363636364, 00:24:56.070 "max_latency_us": 5659.927272727273 00:24:56.070 } 00:24:56.070 ], 00:24:56.070 "core_count": 1 00:24:56.070 } 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:56.365 | select(.opcode=="crc32c") 00:24:56.365 | "\(.module_name) \(.executed)"' 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:56.365 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 86201 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 86201 ']' 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 86201 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86201 00:24:56.637 killing process with pid 86201 00:24:56.637 Received shutdown signal, test time was about 2.000000 seconds 00:24:56.637 00:24:56.637 Latency(us) 00:24:56.637 [2024-11-17T01:45:05.096Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:24:56.637 [2024-11-17T01:45:05.096Z] =================================================================================================================== 00:24:56.637 [2024-11-17T01:45:05.096Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86201' 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 86201 00:24:56.637 01:45:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 86201 00:24:57.206 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 85966 00:24:57.206 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 85966 ']' 00:24:57.206 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 85966 00:24:57.206 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:24:57.206 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:57.206 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85966 00:24:57.465 killing process with pid 85966 00:24:57.465 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:57.465 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:57.465 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85966' 00:24:57.465 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 85966 00:24:57.465 01:45:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 85966 00:24:58.034 00:24:58.034 real 0m22.426s 00:24:58.034 user 0m43.438s 00:24:58.034 sys 0m4.448s 00:24:58.034 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.034 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:58.034 ************************************ 00:24:58.034 END TEST nvmf_digest_clean 00:24:58.034 ************************************ 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:58.293 ************************************ 00:24:58.293 START TEST nvmf_digest_error 00:24:58.293 ************************************ 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:24:58.293 01:45:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:58.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=86303 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 86303 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86303 ']' 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:58.293 01:45:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:58.293 [2024-11-17 01:45:06.673493] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:58.293 [2024-11-17 01:45:06.673987] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:58.553 [2024-11-17 01:45:06.855104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.553 [2024-11-17 01:45:06.934113] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:58.553 [2024-11-17 01:45:06.934428] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:58.553 [2024-11-17 01:45:06.934574] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:58.553 [2024-11-17 01:45:06.934605] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:58.553 [2024-11-17 01:45:06.934620] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
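The nvmf_digest_error test starting here brings up a fresh target and routes its crc32c operations through the accel error-injection module, so digests can be corrupted on purpose; the bdevperf side is configured with an unlimited bdev retry count and then sees the "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions that appear further down. A condensed sketch of the RPCs this test issues, all taken from the commands that follow in the log (rpc_cmd is the harness wrapper for the target's default RPC socket, shown here as a plain rpc.py call; the meaning of "-i 256" is left as the test uses it):

  TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"                        # target side, /var/tmp/spdk.sock
  BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $TGT_RPC accel_assign_opc -o crc32c -m error                                 # route crc32c through the error module
  $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1     # -1: retry failed I/O indefinitely
  $TGT_RPC accel_error_inject_error -o crc32c -t disable                       # start with injection disabled
  $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 256                # then corrupt digests during the run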
00:24:58.553 [2024-11-17 01:45:06.935890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.121 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.380 [2024-11-17 01:45:07.580715] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:59.380 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.380 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:59.380 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:59.380 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.380 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.380 [2024-11-17 01:45:07.732289] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:24:59.380 null0 00:24:59.380 [2024-11-17 01:45:07.827831] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:59.639 [2024-11-17 01:45:07.852101] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86335 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86335 /var/tmp/bperf.sock 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:59.639 01:45:07 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86335 ']' 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:59.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:59.639 01:45:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:59.639 [2024-11-17 01:45:07.942659] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:24:59.639 [2024-11-17 01:45:07.942795] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86335 ] 00:24:59.898 [2024-11-17 01:45:08.111734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:59.898 [2024-11-17 01:45:08.234423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:00.156 [2024-11-17 01:45:08.416767] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:00.415 01:45:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:00.415 01:45:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:00.415 01:45:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:00.415 01:45:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:00.674 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:00.674 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.674 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:00.674 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.674 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:00.674 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:00.932 nvme0n1 00:25:01.191 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:01.191 01:45:09 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.191 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:01.191 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.191 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:01.191 01:45:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:01.191 Running I/O for 2 seconds... 00:25:01.191 [2024-11-17 01:45:09.525615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.191 [2024-11-17 01:45:09.525894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.191 [2024-11-17 01:45:09.526025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.191 [2024-11-17 01:45:09.543495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.191 [2024-11-17 01:45:09.543556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.191 [2024-11-17 01:45:09.543580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.191 [2024-11-17 01:45:09.561033] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.191 [2024-11-17 01:45:09.561094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.191 [2024-11-17 01:45:09.561115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.191 [2024-11-17 01:45:09.578308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.191 [2024-11-17 01:45:09.578374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.191 [2024-11-17 01:45:09.578392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.191 [2024-11-17 01:45:09.596481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.191 [2024-11-17 01:45:09.596705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.191 [2024-11-17 01:45:09.596736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.191 [2024-11-17 01:45:09.616186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.191 [2024-11-17 01:45:09.616253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:11 nsid:1 lba:13989 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.191 [2024-11-17 01:45:09.616272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.191 [2024-11-17 01:45:09.634585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.191 [2024-11-17 01:45:09.634790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.191 [2024-11-17 01:45:09.634856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.654059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.654286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.654478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.672875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.673097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:2915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.673283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.691487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.691716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17381 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.691980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.709914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.710137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:21013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.710323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.728191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.728412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:10548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.728615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.746562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 
01:45:09.746776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.746995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.764991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.765220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:693 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.765403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.784125] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.784369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.784518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.804077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.804321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:1742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.804550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.823145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.823400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.823598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.842037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.842246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.842273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.860126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.860361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23148 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.860385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.878264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.451 [2024-11-17 01:45:09.878323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:8302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.451 [2024-11-17 01:45:09.878345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.451 [2024-11-17 01:45:09.895305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.452 [2024-11-17 01:45:09.895525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.452 [2024-11-17 01:45:09.895549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.711 [2024-11-17 01:45:09.914252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.711 [2024-11-17 01:45:09.914317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7665 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.711 [2024-11-17 01:45:09.914336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.711 [2024-11-17 01:45:09.931364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.711 [2024-11-17 01:45:09.931423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:11490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.711 [2024-11-17 01:45:09.931447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.711 [2024-11-17 01:45:09.948600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.711 [2024-11-17 01:45:09.948831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:11357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.711 [2024-11-17 01:45:09.948856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.711 [2024-11-17 01:45:09.965944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.711 [2024-11-17 01:45:09.966181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:10167 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.711 [2024-11-17 01:45:09.966327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.711 [2024-11-17 01:45:09.983269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.711 [2024-11-17 01:45:09.983497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:7271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.711 [2024-11-17 01:45:09.983664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.711 
[2024-11-17 01:45:10.001088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.001332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:328 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.001587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.712 [2024-11-17 01:45:10.021994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.022283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.022433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.712 [2024-11-17 01:45:10.041520] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.041758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.041915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.712 [2024-11-17 01:45:10.060126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.060378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.060617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.712 [2024-11-17 01:45:10.078022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.078249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.078498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.712 [2024-11-17 01:45:10.095845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.096101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13925 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.096237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.712 [2024-11-17 01:45:10.115274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.115520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.115912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.712 [2024-11-17 01:45:10.136010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.136293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:4580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.136323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.712 [2024-11-17 01:45:10.154737] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.712 [2024-11-17 01:45:10.154806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:7859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.712 [2024-11-17 01:45:10.154855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.971 [2024-11-17 01:45:10.173245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.971 [2024-11-17 01:45:10.173307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:16700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.971 [2024-11-17 01:45:10.173330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.971 [2024-11-17 01:45:10.190437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.971 [2024-11-17 01:45:10.190502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:19394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.971 [2024-11-17 01:45:10.190520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.971 [2024-11-17 01:45:10.207543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.971 [2024-11-17 01:45:10.207632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.971 [2024-11-17 01:45:10.207668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.971 [2024-11-17 01:45:10.224569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.971 [2024-11-17 01:45:10.224785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.971 [2024-11-17 01:45:10.224829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.971 [2024-11-17 01:45:10.241681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.971 [2024-11-17 01:45:10.241937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:15316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.971 [2024-11-17 
01:45:10.242125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.971 [2024-11-17 01:45:10.259282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.971 [2024-11-17 01:45:10.259518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.971 [2024-11-17 01:45:10.259675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.971 [2024-11-17 01:45:10.278185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.971 [2024-11-17 01:45:10.278422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:11292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.278663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.972 [2024-11-17 01:45:10.297165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.972 [2024-11-17 01:45:10.297401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:14780 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.297597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.972 [2024-11-17 01:45:10.314722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.972 [2024-11-17 01:45:10.314962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:18597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.315210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.972 [2024-11-17 01:45:10.332913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.972 [2024-11-17 01:45:10.333140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2389 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.333335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.972 [2024-11-17 01:45:10.350488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.972 [2024-11-17 01:45:10.350724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23388 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.350867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.972 [2024-11-17 01:45:10.367865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.972 [2024-11-17 01:45:10.368077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 
nsid:1 lba:7208 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.368107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.972 [2024-11-17 01:45:10.385132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.972 [2024-11-17 01:45:10.385191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:23189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.385212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.972 [2024-11-17 01:45:10.402082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.972 [2024-11-17 01:45:10.402150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.402168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:01.972 [2024-11-17 01:45:10.419019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:01.972 [2024-11-17 01:45:10.419231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:01.972 [2024-11-17 01:45:10.419260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.231 [2024-11-17 01:45:10.437774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.231 [2024-11-17 01:45:10.437861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5228 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.231 [2024-11-17 01:45:10.437883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.231 [2024-11-17 01:45:10.455149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.231 [2024-11-17 01:45:10.455216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:16120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.231 [2024-11-17 01:45:10.455234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.231 [2024-11-17 01:45:10.472544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.231 [2024-11-17 01:45:10.472754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.231 [2024-11-17 01:45:10.472784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.231 [2024-11-17 01:45:10.489923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.231 [2024-11-17 
01:45:10.489982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2811 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.231 [2024-11-17 01:45:10.490003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.231 13916.00 IOPS, 54.36 MiB/s [2024-11-17T01:45:10.690Z] [2024-11-17 01:45:10.507118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.231 [2024-11-17 01:45:10.507182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18977 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.231 [2024-11-17 01:45:10.507201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.231 [2024-11-17 01:45:10.524286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.231 [2024-11-17 01:45:10.524501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.231 [2024-11-17 01:45:10.524530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.231 [2024-11-17 01:45:10.541550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.231 [2024-11-17 01:45:10.541609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:25474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.231 [2024-11-17 01:45:10.541630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.231 [2024-11-17 01:45:10.558854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.231 [2024-11-17 01:45:10.558918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:4197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.231 [2024-11-17 01:45:10.558936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.232 [2024-11-17 01:45:10.575756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.232 [2024-11-17 01:45:10.575826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:9169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.232 [2024-11-17 01:45:10.575849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.232 [2024-11-17 01:45:10.592764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.232 [2024-11-17 01:45:10.592850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:15518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.232 [2024-11-17 01:45:10.592873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.232 [2024-11-17 
01:45:10.610013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.232 [2024-11-17 01:45:10.610077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8273 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.232 [2024-11-17 01:45:10.610095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.232 [2024-11-17 01:45:10.627270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.232 [2024-11-17 01:45:10.627327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.232 [2024-11-17 01:45:10.627350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.232 [2024-11-17 01:45:10.644319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.232 [2024-11-17 01:45:10.644550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.232 [2024-11-17 01:45:10.644579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.232 [2024-11-17 01:45:10.669067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.232 [2024-11-17 01:45:10.669126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:12988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.232 [2024-11-17 01:45:10.669147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.232 [2024-11-17 01:45:10.686339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.232 [2024-11-17 01:45:10.686399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.232 [2024-11-17 01:45:10.686420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.704472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.704694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:22223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.704718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.721838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.721896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:24566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.721917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.738881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.739089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.739119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.756390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.756609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:24147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.756634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.774893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.775111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:6011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.775143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.793334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.793398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:1071 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.793416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.810508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.810732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:22266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.810756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.827855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.827916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:16311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.827953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.845047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.845122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:7842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:02.491 [2024-11-17 01:45:10.845141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.862099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.491 [2024-11-17 01:45:10.862164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:18408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.491 [2024-11-17 01:45:10.862183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.491 [2024-11-17 01:45:10.880612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.492 [2024-11-17 01:45:10.880675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:4758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.492 [2024-11-17 01:45:10.880697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.492 [2024-11-17 01:45:10.901044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.492 [2024-11-17 01:45:10.901114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:6332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.492 [2024-11-17 01:45:10.901135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.492 [2024-11-17 01:45:10.919382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.492 [2024-11-17 01:45:10.919443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.492 [2024-11-17 01:45:10.919465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.492 [2024-11-17 01:45:10.937472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.492 [2024-11-17 01:45:10.937540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4699 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.492 [2024-11-17 01:45:10.937559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.751 [2024-11-17 01:45:10.956997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.751 [2024-11-17 01:45:10.957064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:24123 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.751 [2024-11-17 01:45:10.957084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.751 [2024-11-17 01:45:10.975376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.751 [2024-11-17 01:45:10.975439] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.751 [2024-11-17 01:45:10.975461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.751 [2024-11-17 01:45:10.993525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.751 [2024-11-17 01:45:10.993592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.751 [2024-11-17 01:45:10.993611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.751 [2024-11-17 01:45:11.011848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.751 [2024-11-17 01:45:11.011912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:19891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.751 [2024-11-17 01:45:11.011938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.751 [2024-11-17 01:45:11.030229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.751 [2024-11-17 01:45:11.030444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:5824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.751 [2024-11-17 01:45:11.030477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.751 [2024-11-17 01:45:11.048621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.751 [2024-11-17 01:45:11.048854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:23384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.751 [2024-11-17 01:45:11.048879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.751 [2024-11-17 01:45:11.066967] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.751 [2024-11-17 01:45:11.067181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.752 [2024-11-17 01:45:11.067212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.752 [2024-11-17 01:45:11.085509] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.752 [2024-11-17 01:45:11.085731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.752 [2024-11-17 01:45:11.085756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.752 [2024-11-17 01:45:11.104286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 
00:25:02.752 [2024-11-17 01:45:11.104492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:8553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.752 [2024-11-17 01:45:11.104517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.752 [2024-11-17 01:45:11.122546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.752 [2024-11-17 01:45:11.122775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:7331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.752 [2024-11-17 01:45:11.122927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.752 [2024-11-17 01:45:11.142437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.752 [2024-11-17 01:45:11.142667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:8243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.752 [2024-11-17 01:45:11.143045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.752 [2024-11-17 01:45:11.163634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.752 [2024-11-17 01:45:11.163868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.752 [2024-11-17 01:45:11.164153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.752 [2024-11-17 01:45:11.182248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.752 [2024-11-17 01:45:11.182480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5614 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.752 [2024-11-17 01:45:11.182615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:02.752 [2024-11-17 01:45:11.200186] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:02.752 [2024-11-17 01:45:11.200415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:02.752 [2024-11-17 01:45:11.200671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.011 [2024-11-17 01:45:11.219519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.011 [2024-11-17 01:45:11.219763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.011 [2024-11-17 01:45:11.219929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.011 [2024-11-17 01:45:11.237167] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.011 [2024-11-17 01:45:11.237394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:9404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.237527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.255672] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.255737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.255756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.274234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.274294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.274313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.292330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.292539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:2632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.292562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.309875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.309934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:12800 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.309953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.327222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.327281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:25483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.327299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.344628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.344688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.344706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.361880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.361938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:20300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.361956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.379114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.379174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:3336 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.379192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.396322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.396537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:24128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.396562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.413762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.413848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:2213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.413867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.431394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.431453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:3703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.431471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.448944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.449142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:15260 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.449166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.012 [2024-11-17 01:45:11.466909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.012 [2024-11-17 01:45:11.466971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.012 [2024-11-17 01:45:11.466990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.271 [2024-11-17 01:45:11.485110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.271 [2024-11-17 01:45:11.485169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.271 [2024-11-17 01:45:11.485187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.271 [2024-11-17 01:45:11.503782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:03.271 [2024-11-17 01:45:11.503874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:19639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:03.271 [2024-11-17 01:45:11.503895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:03.271 13979.00 IOPS, 54.61 MiB/s 00:25:03.271 Latency(us) 00:25:03.271 [2024-11-17T01:45:11.730Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.271 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:03.271 nvme0n1 : 2.01 14017.14 54.75 0.00 0.00 9124.99 8221.79 32887.16 00:25:03.271 [2024-11-17T01:45:11.730Z] =================================================================================================================== 00:25:03.271 [2024-11-17T01:45:11.730Z] Total : 14017.14 54.75 0.00 0.00 9124.99 8221.79 32887.16 00:25:03.271 { 00:25:03.271 "results": [ 00:25:03.271 { 00:25:03.271 "job": "nvme0n1", 00:25:03.271 "core_mask": "0x2", 00:25:03.271 "workload": "randread", 00:25:03.271 "status": "finished", 00:25:03.271 "queue_depth": 128, 00:25:03.271 "io_size": 4096, 00:25:03.271 "runtime": 2.012679, 00:25:03.271 "iops": 14017.138351421165, 00:25:03.271 "mibps": 54.754446685238925, 00:25:03.271 "io_failed": 0, 00:25:03.271 "io_timeout": 0, 00:25:03.271 "avg_latency_us": 9124.98604900558, 00:25:03.271 "min_latency_us": 8221.789090909091, 00:25:03.271 "max_latency_us": 32887.156363636364 00:25:03.271 } 00:25:03.271 ], 00:25:03.271 "core_count": 1 00:25:03.271 } 00:25:03.271 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:03.271 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:03.271 | .driver_specific 00:25:03.271 | .nvme_error 00:25:03.271 | .status_code 00:25:03.271 | .command_transient_transport_error' 00:25:03.272 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:03.272 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 110 > 0 )) 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86335 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86335 ']' 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@958 -- # kill -0 86335 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86335 00:25:03.531 killing process with pid 86335 00:25:03.531 Received shutdown signal, test time was about 2.000000 seconds 00:25:03.531 00:25:03.531 Latency(us) 00:25:03.531 [2024-11-17T01:45:11.990Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:03.531 [2024-11-17T01:45:11.990Z] =================================================================================================================== 00:25:03.531 [2024-11-17T01:45:11.990Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86335' 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86335 00:25:03.531 01:45:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86335 00:25:04.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86402 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86402 /var/tmp/bperf.sock 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86402 ']' 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
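For reference, the transient-error check the harness performs above (host/digest.sh, get_transient_errcount) amounts to reading the controller's nvme_error statistics over the bdevperf RPC socket and asserting that the transient transport error counter is non-zero. A minimal stand-alone sketch of that check follows, assuming an SPDK checkout at the path used in this run and a bdevperf instance still serving RPCs on /var/tmp/bperf.sock; SPDK_DIR is an illustrative variable, not part of the test scripts.

#!/usr/bin/env bash
# Illustrative sketch only: reproduce the get_transient_errcount check by hand.
# Assumes nvme error counters are being collected, as enabled earlier in this
# run via "bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1".
SPDK_DIR=${SPDK_DIR:-/home/vagrant/spdk_repo/spdk}
BPERF_SOCK=/var/tmp/bperf.sock

# Query per-bdev I/O statistics and extract the transient transport error
# count for nvme0n1 (same jq path as used by host/digest.sh above).
errcount=$("$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
  | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

# The digest-error test passes when at least one transient transport error,
# caused by the corrupted CRC-32C data digest, was observed.
if (( errcount > 0 )); then
  echo "digest errors detected: $errcount"
else
  echo "no digest errors seen"
fi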
00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.469 01:45:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:04.469 [2024-11-17 01:45:12.701967] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:04.469 [2024-11-17 01:45:12.702359] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6I/O size of 131072 is greater than zero copy threshold (65536). 00:25:04.469 Zero copy mechanism will not be used. 00:25:04.469 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86402 ] 00:25:04.469 [2024-11-17 01:45:12.880496] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.728 [2024-11-17 01:45:12.963363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:04.729 [2024-11-17 01:45:13.107114] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:05.297 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.297 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:05.297 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:05.297 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:05.556 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:05.556 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.556 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:05.556 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.556 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.556 01:45:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:05.816 nvme0n1 00:25:05.816 01:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:05.816 01:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.816 01:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:05.816 01:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.816 01:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:05.816 01:45:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:05.816 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:05.816 Zero copy mechanism will not be used. 00:25:05.816 Running I/O for 2 seconds... 00:25:05.816 [2024-11-17 01:45:14.252434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.816 [2024-11-17 01:45:14.252673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.816 [2024-11-17 01:45:14.252711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:05.816 [2024-11-17 01:45:14.257502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.816 [2024-11-17 01:45:14.257567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.816 [2024-11-17 01:45:14.257586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:05.816 [2024-11-17 01:45:14.262399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.816 [2024-11-17 01:45:14.262469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.816 [2024-11-17 01:45:14.262489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:05.816 [2024-11-17 01:45:14.267029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.816 [2024-11-17 01:45:14.267089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.816 [2024-11-17 01:45:14.267110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:05.816 [2024-11-17 01:45:14.272264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:05.816 [2024-11-17 01:45:14.272323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:05.816 [2024-11-17 01:45:14.272345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.075 [2024-11-17 01:45:14.277579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.277647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.277666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.282330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 
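Before the errors above start streaming, the xtrace at 01:45:12-01:45:14 (host/digest.sh@56-@69) wires this second run up: start bdevperf with -z on its own RPC socket, enable NVMe error statistics, attach the TCP controller with data digest (--ddgst), arm crc32c corruption through accel_error_inject_error, then trigger perform_tests. A condensed, stand-alone rendering of that sequence follows; the APP_SOCK path and the bare socket-wait loop are assumptions (the trace uses rpc_cmd without naming a socket, and waitforlisten), while every command, flag and address is lifted from the log.

#!/usr/bin/env bash
# Condensed sketch of the setup traced above, not the test script itself.
set -euo pipefail

SPDK=/home/vagrant/spdk_repo/spdk        # repo path as used in the log
BPERF_SOCK=/var/tmp/bperf.sock           # bdevperf RPC socket from the log
APP_SOCK=/var/tmp/spdk.sock              # assumption: socket behind the trace's bare rpc_cmd

# 1. bdevperf: core mask 0x2, randread, 128 KiB I/O, queue depth 16, 2 s,
#    -z = initialize and wait for the perform_tests RPC (host/digest.sh@57).
"$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF_SOCK" \
    -w randread -o 131072 -t 2 -q 16 -z &
bperf_pid=$!

# Crude stand-in for the waitforlisten helper used in the trace.
until [ -S "$BPERF_SOCK" ]; do sleep 0.1; done

# 2. Per-status-code NVMe error counters and the retry count, exactly as traced
#    (host/digest.sh@61).
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_set_options \
    --nvme-error-stat --bdev-retry-count -1

# 3. Clear any previous injection, then attach the TCP controller with data
#    digest enabled on the host side (host/digest.sh@63-@64).
"$SPDK/scripts/rpc.py" -s "$APP_SOCK" accel_error_inject_error -o crc32c -t disable
"$SPDK/scripts/rpc.py" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst \
    -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# 4. Arm crc32c corruption (-t corrupt -i 32, as traced at host/digest.sh@67);
#    the mismatching data digests are what show up in the log as
#    "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR completions.
"$SPDK/scripts/rpc.py" -s "$APP_SOCK" accel_error_inject_error -o crc32c -t corrupt -i 32

# 5. Kick the 2-second run; the errors accumulate in bdev_get_iostat and are
#    checked afterwards as in the sketch further up (host/digest.sh@69).
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests

kill "$bperf_pid"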
[2024-11-17 01:45:14.282556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.282580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.287200] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.287260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.287281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.292063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.292121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.292142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.296670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.296735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.296754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.301362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.301577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.301602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.306212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.306272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.306293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.310802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.310866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.310885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.315334] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.315399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.315418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.319979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.320040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.320064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.324575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.324634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.324656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.329216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.329281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.329299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.333845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.333909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.333927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.338311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.338370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.338391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.342930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.342988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.343008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.347378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.347442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.347461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.352221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.352285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.352303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.356890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.356959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.356979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.361523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.361582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.361605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.366171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.366238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.366256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.370719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.370941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.370972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.375501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.375560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.375581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.380320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.380384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.380403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.384997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.385064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.385082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.389639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.389698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.389720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.076 [2024-11-17 01:45:14.394258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.076 [2024-11-17 01:45:14.394467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.076 [2024-11-17 01:45:14.394509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.399156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.399237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.399256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.403752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.403834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.403854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.408334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.408393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.408413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.412989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.413048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.413069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.417655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.417722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.417741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.422621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.422688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.422707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.427334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.427393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.427416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.432078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.432123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.432144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.436692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.436746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.436765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.441512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.441572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.441593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.446251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.446309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.446332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.450862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.450928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.450947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.455379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.455444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.455462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.460225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.460283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.460303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.464888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.464945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.464967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.469467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.469535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.469553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.474181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.474246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.474264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.478782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.478851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.478874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.483330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.483389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.483411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.487851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.487908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.487928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.492483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.492549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.492568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.497191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.497250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.497271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.501738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.501803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.501852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.506330] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.506394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.506413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.510957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.511014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.511036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.515605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.515691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.515714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.077 [2024-11-17 01:45:14.520363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.077 [2024-11-17 01:45:14.520430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.077 [2024-11-17 01:45:14.520448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.078 [2024-11-17 01:45:14.525027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.078 [2024-11-17 01:45:14.525091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.078 [2024-11-17 01:45:14.525109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.078 [2024-11-17 01:45:14.529928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.078 [2024-11-17 01:45:14.529987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.078 [2024-11-17 01:45:14.530008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.535124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.535184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.535207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.540250] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.540467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.540492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.545215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.545449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.545580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.550281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.550507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.550646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.555485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.555719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.555957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.560758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.560993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.561137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.565964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.566176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.566340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.571134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.571352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.571540] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.576475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.576691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.338 [2024-11-17 01:45:14.576847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.338 [2024-11-17 01:45:14.581629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.338 [2024-11-17 01:45:14.581869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.582002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.586620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.586685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.586703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.591171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.591230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.591251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.595797] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.595853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.595875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.600332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.600410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.600428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.605066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.605133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.605151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.609594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.609653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.609675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.614311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.614524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.614554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.619261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.619344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.619363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.623935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.624016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.624050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.628466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.628525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.628548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.633137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.633195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.633215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.637749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.637841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.637863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.642305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.642364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.642385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.646873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.646931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.646954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.651995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.652058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.652080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.656983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.657051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.657071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.661766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.661871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.661894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.667113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.667178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.667215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.339 [2024-11-17 01:45:14.672343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:06.339 [2024-11-17 01:45:14.672413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.339 [2024-11-17 01:45:14.672432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.677607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.677674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.677692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.682848] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.682949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.682969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.687881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.687945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.687988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.693134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.693212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.693237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.698291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.698360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.698380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.703541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.703618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.703655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.708546] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.708776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.708819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.713784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.714029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.714174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.719049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.719291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.719440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.724370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.724609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.724818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.729622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.729851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.729995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.734857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.735088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.735231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.740231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.740471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.740607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.745576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.745804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.745978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.751005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.751250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.751390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.756463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.756686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.756921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.761607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.761855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.761987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.766926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.766996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.767015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.340 [2024-11-17 01:45:14.771705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.340 [2024-11-17 01:45:14.771768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.340 [2024-11-17 01:45:14.771789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.341 [2024-11-17 01:45:14.776486] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.341 [2024-11-17 01:45:14.776548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.341 [2024-11-17 01:45:14.776570] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.341 [2024-11-17 01:45:14.781180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.341 [2024-11-17 01:45:14.781246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.341 [2024-11-17 01:45:14.781265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.341 [2024-11-17 01:45:14.786102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.341 [2024-11-17 01:45:14.786154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.341 [2024-11-17 01:45:14.786173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.341 [2024-11-17 01:45:14.790899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.341 [2024-11-17 01:45:14.790985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.341 [2024-11-17 01:45:14.791023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.601 [2024-11-17 01:45:14.796227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.601 [2024-11-17 01:45:14.796319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.796341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.801112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.801197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.801217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.806281] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.806348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.806367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.811005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.811064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6656 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.811091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.815764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.815855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.815880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.820553] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.820781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.820818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.825581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.825825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.826053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.830861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.831084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.831221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.836102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.836313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.836482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.841279] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.841521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.841657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.846759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.847009] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.847209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.852051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.852269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.852409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.857112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.857342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.857486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.862324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.862530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.862726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.867722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.867973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.868177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.873091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.873311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.873451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.878223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.878450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.878634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.883408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.883662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.883813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.889049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.889285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.889469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.894267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.894492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.894624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.899513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.899754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.899982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.904813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.905038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.905062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.909710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.909771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.909788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.914178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.914237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.914255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.918845] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.918904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.918922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.923320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.923379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.602 [2024-11-17 01:45:14.923397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.602 [2024-11-17 01:45:14.928053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.602 [2024-11-17 01:45:14.928114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.928133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.932617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.932676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.932694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.937256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.937315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.937333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.942025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.942084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.942101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.946607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.946666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.946684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.951291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.951350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.951368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.956067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.956123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.956141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.960528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.960587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.960604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.965169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.965244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.965261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.969891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.969949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.969966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.974570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.974629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.974647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.979237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.979295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.979312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.983783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.983853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.983872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.988354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.988413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.988430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.993011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.993071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.993088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:14.997558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:14.997618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:14.997635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.002166] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.002224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.002242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.006753] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.006840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.006860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.011347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.011407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11392 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.011424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.016086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.016145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.016162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.020755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.020872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.020909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.025442] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.025500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.025517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.030057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.030116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.030133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.034608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.034667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.034684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.039233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.039292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.039309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.043756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.043830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.043850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.048462] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.603 [2024-11-17 01:45:15.048521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.603 [2024-11-17 01:45:15.048539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.603 [2024-11-17 01:45:15.053118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.604 [2024-11-17 01:45:15.053177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.604 [2024-11-17 01:45:15.053194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.058258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.058320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.058339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.063069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.063130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.063164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.067956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.068062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.068080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.072492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.072550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.072568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.077214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.077272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.077289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.081767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.081854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.081873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.086343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.086401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.086418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.090889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.090949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.090966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.095542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.095602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.095661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.100265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.100323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.100341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.104915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.104974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.104992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.109424] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.109482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.109500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.113971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.114029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.114047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.118490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.118550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.118567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.864 [2024-11-17 01:45:15.123019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.864 [2024-11-17 01:45:15.123079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.864 [2024-11-17 01:45:15.123096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.127506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.127565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.127582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.132274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.132332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.132349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.136815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.136905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.136923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.141455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.141664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.141687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.146271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.146330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.146348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.150924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.150983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.151000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.155427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.155485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.155503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.160054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.160115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.160133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.164585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.164644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.164660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.169201] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.169260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.169277] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.173759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.173845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.173864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.178452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.178514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.178532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.183379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.183440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.183458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.188307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.188530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.188555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.193555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.193618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.193636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.198507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.198568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.198586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.203318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.203533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.203557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.208420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.208479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.208497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.212915] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.212973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.212991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.217431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.217489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.217506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.222378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.222609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.222633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.227712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.227763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.227783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.233067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.233128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.233163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.238428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.238476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.238494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.243780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.243861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.243884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.865 [2024-11-17 01:45:15.249029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.249079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.865 [2024-11-17 01:45:15.249099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.865 6401.00 IOPS, 800.12 MiB/s [2024-11-17T01:45:15.324Z] [2024-11-17 01:45:15.253990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.865 [2024-11-17 01:45:15.254038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.254057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.257712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.257771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.257789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.261601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.261661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.261678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.265908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.265954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.265972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.269102] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.269162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.269196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.273091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.273150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.273167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.276298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.276355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.276373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.279636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.279715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.279735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.283205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.283264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.283281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.286589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.286802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.286842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.291110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.291155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.291173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.294318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.294530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.294554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.299177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.299236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.299253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.303888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.303935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.303983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.308507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.308566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.308584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.313213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.313420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.313444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:06.866 [2024-11-17 01:45:15.318507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:06.866 [2024-11-17 01:45:15.318571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:06.866 [2024-11-17 01:45:15.318590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.126 [2024-11-17 01:45:15.323814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.126 [2024-11-17 01:45:15.323877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.126 [2024-11-17 01:45:15.323898] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.126 [2024-11-17 01:45:15.328785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.126 [2024-11-17 01:45:15.328871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.328890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.333388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.333447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.333465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.338085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.338144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.338162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.342733] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.342793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.342842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.347433] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.347690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.347716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.352379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.352437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.352455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.356923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.356980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14848 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.356997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.361449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.361508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.361525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.366197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.366256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.366274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.370713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.370933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.370957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.375709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.375773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.375792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.380447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.380507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.380524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.385040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.385098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.385116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.389561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.389620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.389637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.394273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.394481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.394504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.399161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.399253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.399271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.403787] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.403878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.403898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.408470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.408530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.408547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.413101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.413159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.413177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.417666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.417726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.417744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.422368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.422428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.422454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.426934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.426993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.427010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.431436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.431495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.431512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.436116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.436158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.436191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.440690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.440916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.440940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.445646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.445706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.127 [2024-11-17 01:45:15.445723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.127 [2024-11-17 01:45:15.450427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.127 [2024-11-17 01:45:15.450486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.450503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.455058] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.455118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.455135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.459810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.459871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.459906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.464414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.464473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.464490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.469081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.469141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.469158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.473617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.473676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.473694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.478274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.478333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.478350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.482872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.482929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.482947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.487374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.487433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.487451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.492091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.492150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.492167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.496607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.496666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.496683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.501285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.501343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.501360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.505981] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.506040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.506058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.510512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.510570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.510587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.515184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.515243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.515260] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.519936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.520038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.520054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.524482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.524537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.524553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.529040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.529095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.529110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.533550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.533606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.533622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.538177] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.538248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.538264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.542749] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.542804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.542834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.547306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.547361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2144 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.547377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.552021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.552076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.552091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.556545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.556601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.556617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.561142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.561197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.561213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.565662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.565717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.565734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.570312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.128 [2024-11-17 01:45:15.570367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.128 [2024-11-17 01:45:15.570383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.128 [2024-11-17 01:45:15.574924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.129 [2024-11-17 01:45:15.574980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.129 [2024-11-17 01:45:15.574996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.129 [2024-11-17 01:45:15.579540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.129 [2024-11-17 01:45:15.579583] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.129 [2024-11-17 01:45:15.579599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.584834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.584900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.584917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.589721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.589779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.589796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.594416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.594471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.594488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.599050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.599106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.599122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.603720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.603780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.603798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.608437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.608491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.608507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.613015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.613070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.613101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.617656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.617711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.617727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.622367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.622422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.622438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.626982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.627036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.627052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.631548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.631604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.631659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.636295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.636349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.636365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.640925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.640978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.640994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.645516] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.645571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.645586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.650160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.650232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.650249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.389 [2024-11-17 01:45:15.654699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.389 [2024-11-17 01:45:15.654755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.389 [2024-11-17 01:45:15.654771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.659288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.659342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.659359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.663932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.664003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.664019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.668686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.668743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.668761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.673697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.673754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.673770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.678738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.678796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.678827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.683563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.683659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.683677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.688489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.688546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.688563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.693325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.693381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.693398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.698026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.698082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.698098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.702717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.702774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.702790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.707302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.707357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.707373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.712056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.712110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.712126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.716617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.716673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.716689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.721347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.721403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.721419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.725963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.726020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.726037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.730556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.730612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.730628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.735175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.735230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.735246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.739784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.739853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15648 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.739871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.744455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.744510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.744526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.749032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.749087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.749102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.753573] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.753628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.753644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.758308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.758363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.758379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.762883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.762937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.762954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.767351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.767406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.767422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.772211] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.772265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.772282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.776854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.776905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.390 [2024-11-17 01:45:15.776921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.390 [2024-11-17 01:45:15.781392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.390 [2024-11-17 01:45:15.781448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.781464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.785984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.786039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.786055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.790610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.790667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.790684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.795306] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.795362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.795378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.799978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.800047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.800064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.804585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.804640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.804656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.809183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.809239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.809254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.813714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.813782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.813814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.818371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.818426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.818442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.822927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.822981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.822998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.827499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.827554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.827570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.832232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.832287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.832303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.836756] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.836821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.836838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.391 [2024-11-17 01:45:15.841384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.391 [2024-11-17 01:45:15.841443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.391 [2024-11-17 01:45:15.841460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.846703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.846760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.846777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.851572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.851669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.851689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.856436] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.856491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.856507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.861031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.861085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.861101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.865756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.865838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.865856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.870599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.870655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.870672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.875332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.875387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.875403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.880010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.880078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.880094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.884647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.884703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.884719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.889320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.889378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.889394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.894310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.894367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.894383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.899113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.899170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.899202] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.904116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.904188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.904205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.909470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.909529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.909546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.914627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.914668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.914685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.919841] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.919884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.919903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.924954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.925012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.925029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.929721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.929777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.929793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.934526] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.934583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13088 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.934599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.939244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.939300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.939317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.944108] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.944165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.944181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.948863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.948932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.948948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.953502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.652 [2024-11-17 01:45:15.953559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.652 [2024-11-17 01:45:15.953575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.652 [2024-11-17 01:45:15.958188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.958245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.958262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:15.962913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.962969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.962985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:15.967535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.967594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.967618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:15.972218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.972275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.972292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:15.976987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.977043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.977059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:15.981619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.981676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.981693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:15.986401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.986458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.986474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:15.991150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.991208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.991225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:15.995914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:15.996000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:15.996031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.000786] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.000855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.000871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.005411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.005467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.005484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.010025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.010081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.010097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.014593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.014659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.014675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.019320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.019377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.019393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.024305] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.024394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.024412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.029064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.029121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.029138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.033856] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.033913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.033929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.038475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.038531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.038548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.043131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.043202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.043218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.047885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.047942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.047975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.052823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.052890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.052907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.057496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.057553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.057569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.062103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.062159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.062175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.066717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.066774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.066790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.071347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.071403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.071420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.076319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.076375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.076391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.080929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.653 [2024-11-17 01:45:16.080984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.653 [2024-11-17 01:45:16.081001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.653 [2024-11-17 01:45:16.085539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.654 [2024-11-17 01:45:16.085594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.654 [2024-11-17 01:45:16.085610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.654 [2024-11-17 01:45:16.090217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.654 [2024-11-17 01:45:16.090273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.654 [2024-11-17 01:45:16.090289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.654 [2024-11-17 01:45:16.095071] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.654 [2024-11-17 01:45:16.095129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.654 [2024-11-17 01:45:16.095146] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.654 [2024-11-17 01:45:16.100015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.654 [2024-11-17 01:45:16.100071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.654 [2024-11-17 01:45:16.100087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.654 [2024-11-17 01:45:16.104954] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.654 [2024-11-17 01:45:16.105026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.654 [2024-11-17 01:45:16.105044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.110240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.110296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.110312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.115351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.115408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.115425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.120354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.120411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.120427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.125031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.125087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.125104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.129619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.129675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:992 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.129692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.134392] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.134449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.134465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.139092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.139149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.139166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.144060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.144114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.144130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.148661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.148718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.148735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.153404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.153460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.153477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.158303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.158362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.158380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.163598] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.163681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.163701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.168724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.168781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.168798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.173817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.173887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.173905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.178721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.178778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.178794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.183604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.183702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.183720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.188430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.188487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.188503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.193171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.193259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.193277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.198128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.198186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.914 [2024-11-17 01:45:16.198217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.914 [2024-11-17 01:45:16.202894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.914 [2024-11-17 01:45:16.202947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.202964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.207606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.207702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.207719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.212518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.212574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.212590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.217103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.217158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.217174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.221686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.221741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.221774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.226233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.226288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.226304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.230793] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.230859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.230876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.235324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.235380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.235396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.239878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.239920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.239951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:07.915 [2024-11-17 01:45:16.244871] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.244939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.244957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:07.915 6503.50 IOPS, 812.94 MiB/s [2024-11-17T01:45:16.374Z] [2024-11-17 01:45:16.251468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x61500002b280) 00:25:07.915 [2024-11-17 01:45:16.251503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:07.915 [2024-11-17 01:45:16.251520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:07.915 00:25:07.915 Latency(us) 00:25:07.915 [2024-11-17T01:45:16.374Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.915 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:07.915 nvme0n1 : 2.00 6503.06 812.88 0.00 0.00 2456.31 1079.85 6613.18 00:25:07.915 [2024-11-17T01:45:16.374Z] =================================================================================================================== 00:25:07.915 [2024-11-17T01:45:16.374Z] Total : 6503.06 812.88 0.00 0.00 2456.31 1079.85 6613.18 00:25:07.915 { 00:25:07.915 "results": [ 00:25:07.915 { 00:25:07.915 "job": "nvme0n1", 00:25:07.915 "core_mask": "0x2", 00:25:07.915 "workload": "randread", 00:25:07.915 "status": "finished", 00:25:07.915 "queue_depth": 16, 00:25:07.915 "io_size": 131072, 00:25:07.915 "runtime": 2.002596, 00:25:07.915 "iops": 6503.059029379865, 00:25:07.915 "mibps": 812.8823786724831, 
00:25:07.915 "io_failed": 0, 00:25:07.915 "io_timeout": 0, 00:25:07.915 "avg_latency_us": 2456.308178676886, 00:25:07.915 "min_latency_us": 1079.8545454545454, 00:25:07.915 "max_latency_us": 6613.178181818182 00:25:07.915 } 00:25:07.915 ], 00:25:07.915 "core_count": 1 00:25:07.915 } 00:25:07.915 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:07.915 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:07.915 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:07.915 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:07.915 | .driver_specific 00:25:07.915 | .nvme_error 00:25:07.915 | .status_code 00:25:07.915 | .command_transient_transport_error' 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 421 > 0 )) 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86402 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86402 ']' 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86402 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86402 00:25:08.174 killing process with pid 86402 00:25:08.174 Received shutdown signal, test time was about 2.000000 seconds 00:25:08.174 00:25:08.174 Latency(us) 00:25:08.174 [2024-11-17T01:45:16.633Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:08.174 [2024-11-17T01:45:16.633Z] =================================================================================================================== 00:25:08.174 [2024-11-17T01:45:16.633Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86402' 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86402 00:25:08.174 01:45:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86402 00:25:09.112 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@56 -- # qd=128 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86468 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86468 /var/tmp/bperf.sock 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86468 ']' 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:09.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:09.113 01:45:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:09.113 [2024-11-17 01:45:17.478104] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:09.113 [2024-11-17 01:45:17.478316] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86468 ] 00:25:09.372 [2024-11-17 01:45:17.659616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.372 [2024-11-17 01:45:17.740454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:09.631 [2024-11-17 01:45:17.885616] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.199 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:10.767 nvme0n1 00:25:10.767 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:10.767 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.767 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:10.767 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.767 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:10.767 01:45:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:10.767 Running I/O for 2 seconds... 00:25:10.767 [2024-11-17 01:45:19.062797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:25:10.767 [2024-11-17 01:45:19.064342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.767 [2024-11-17 01:45:19.064427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:10.767 [2024-11-17 01:45:19.079369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:25:10.767 [2024-11-17 01:45:19.080861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.767 [2024-11-17 01:45:19.080943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.767 [2024-11-17 01:45:19.102586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bff3c8 00:25:10.767 [2024-11-17 01:45:19.105498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.767 [2024-11-17 01:45:19.105560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:25:10.767 [2024-11-17 01:45:19.120262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfe720 00:25:10.767 [2024-11-17 01:45:19.123044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:15465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.767 [2024-11-17 01:45:19.123100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:25:10.767 [2024-11-17 01:45:19.137061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfdeb0 00:25:10.767 [2024-11-17 01:45:19.139597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9204 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.767 [2024-11-17 01:45:19.139674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:25:10.767 [2024-11-17 01:45:19.153331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfd640 00:25:10.767 [2024-11-17 01:45:19.156131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:18120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.767 [2024-11-17 01:45:19.156185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:25:10.767 [2024-11-17 01:45:19.169777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfcdd0 00:25:10.768 [2024-11-17 01:45:19.172449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:9514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.768 [2024-11-17 01:45:19.172511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:25:10.768 [2024-11-17 01:45:19.185855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfc560 00:25:10.768 [2024-11-17 01:45:19.188432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.768 [2024-11-17 01:45:19.188490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:25:10.768 [2024-11-17 01:45:19.201950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfbcf0 00:25:10.768 [2024-11-17 01:45:19.204524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.768 [2024-11-17 01:45:19.204576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:25:10.768 [2024-11-17 01:45:19.218080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfb480 00:25:10.768 [2024-11-17 01:45:19.220691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:2572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:10.768 [2024-11-17 01:45:19.220762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.235405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfac10 00:25:11.026 [2024-11-17 01:45:19.238062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.238122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.251723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bfa3a0 00:25:11.026 [2024-11-17 01:45:19.254267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:23198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.254324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.267946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf9b30 00:25:11.026 [2024-11-17 01:45:19.270336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.270390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.284193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf92c0 00:25:11.026 [2024-11-17 01:45:19.286613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:3064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.286666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.300376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50 00:25:11.026 [2024-11-17 01:45:19.302890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.302942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.316901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0 00:25:11.026 [2024-11-17 01:45:19.319478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:4432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.319537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.336160] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7970 00:25:11.026 [2024-11-17 01:45:19.338934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:9844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.338983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.354894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7100 00:25:11.026 [2024-11-17 01:45:19.357444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.357503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.372514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 
00:25:11.026 [2024-11-17 01:45:19.374932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:16615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.374985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.388900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:25:11.026 [2024-11-17 01:45:19.391163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.391216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.405066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:25:11.026 [2024-11-17 01:45:19.407320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:10262 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.407383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.421555] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:25:11.026 [2024-11-17 01:45:19.423910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.423953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.437770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:25:11.026 [2024-11-17 01:45:19.440072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:19099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.440125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:25:11.026 [2024-11-17 01:45:19.454095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:25:11.026 [2024-11-17 01:45:19.456389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21593 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.026 [2024-11-17 01:45:19.456442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:25:11.027 [2024-11-17 01:45:19.470580] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:25:11.027 [2024-11-17 01:45:19.472914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:2051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.027 [2024-11-17 01:45:19.472967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.488062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:25:11.286 [2024-11-17 01:45:19.490275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:18 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.490335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.504623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:25:11.286 [2024-11-17 01:45:19.506866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:24968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.506922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.521133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:25:11.286 [2024-11-17 01:45:19.523343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.523396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.537499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:25:11.286 [2024-11-17 01:45:19.539702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.539757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.553743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0bc0 00:25:11.286 [2024-11-17 01:45:19.555888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:4536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.555966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.570056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:25:11.286 [2024-11-17 01:45:19.572212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:13094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.572270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.586185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016befae0 00:25:11.286 [2024-11-17 01:45:19.588315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9082 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.588368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.603600] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef270 00:25:11.286 [2024-11-17 01:45:19.606004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.606058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.621241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:25:11.286 [2024-11-17 01:45:19.623333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:3651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.623387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.638394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:25:11.286 [2024-11-17 01:45:19.640602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8538 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.640663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.655548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed920 00:25:11.286 [2024-11-17 01:45:19.657708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:13452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.657767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.672023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:25:11.286 [2024-11-17 01:45:19.674034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.674091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.688262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:25:11.286 [2024-11-17 01:45:19.690257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.690310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.704729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebfd0 00:25:11.286 [2024-11-17 01:45:19.706685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:6265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.706738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 
m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.721251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb760 00:25:11.286 [2024-11-17 01:45:19.723220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.723268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:25:11.286 [2024-11-17 01:45:19.739006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaef0 00:25:11.286 [2024-11-17 01:45:19.741397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.286 [2024-11-17 01:45:19.741467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:25:11.546 [2024-11-17 01:45:19.758860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:25:11.546 [2024-11-17 01:45:19.760919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.546 [2024-11-17 01:45:19.760979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:25:11.546 [2024-11-17 01:45:19.776504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:25:11.546 [2024-11-17 01:45:19.778453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:21834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.546 [2024-11-17 01:45:19.778510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:25:11.546 [2024-11-17 01:45:19.793494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:25:11.546 [2024-11-17 01:45:19.795601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:12446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.546 [2024-11-17 01:45:19.795698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:25:11.546 [2024-11-17 01:45:19.810730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8d30 00:25:11.546 [2024-11-17 01:45:19.812702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:9331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.546 [2024-11-17 01:45:19.812756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:25:11.546 [2024-11-17 01:45:19.827874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:25:11.546 [2024-11-17 01:45:19.829761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.546 [2024-11-17 01:45:19.829823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:25:11.546 [2024-11-17 01:45:19.845084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:25:11.546 [2024-11-17 01:45:19.846924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.846987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.862271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:25:11.547 [2024-11-17 01:45:19.864241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:14384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.864301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.879586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6b70 00:25:11.547 [2024-11-17 01:45:19.881553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.881612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.896949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:25:11.547 [2024-11-17 01:45:19.898818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.898871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.914170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:25:11.547 [2024-11-17 01:45:19.916052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:12612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.916105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.931320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:25:11.547 [2024-11-17 01:45:19.933187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.933258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.948799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:25:11.547 [2024-11-17 01:45:19.950618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:15846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.950679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.966572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4140 00:25:11.547 [2024-11-17 01:45:19.968449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16755 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.968510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.982988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:25:11.547 [2024-11-17 01:45:19.984688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:19.984745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:11.547 [2024-11-17 01:45:19.999474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3060 00:25:11.547 [2024-11-17 01:45:20.001408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.547 [2024-11-17 01:45:20.001478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:25:11.806 [2024-11-17 01:45:20.019133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be27f0 00:25:11.806 [2024-11-17 01:45:20.021295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.806 [2024-11-17 01:45:20.021351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:11.806 [2024-11-17 01:45:20.039501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1f80 00:25:11.806 [2024-11-17 01:45:20.041298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:4378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.806 [2024-11-17 01:45:20.041355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:11.806 14802.00 IOPS, 57.82 MiB/s [2024-11-17T01:45:20.265Z] [2024-11-17 01:45:20.058158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:25:11.806 [2024-11-17 01:45:20.059784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.059840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.074425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:25:11.807 [2024-11-17 01:45:20.076095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:118 nsid:1 lba:14127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.076165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.092111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:25:11.807 [2024-11-17 01:45:20.093741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:21667 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.093804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.109223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:25:11.807 [2024-11-17 01:45:20.110764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:23802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.110831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.125596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf550 00:25:11.807 [2024-11-17 01:45:20.127136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:17501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.127191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.141695] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:25:11.807 [2024-11-17 01:45:20.143220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.143291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.158069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:25:11.807 [2024-11-17 01:45:20.159508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.159568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.180814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bddc00 00:25:11.807 [2024-11-17 01:45:20.183411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.183464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.196934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bde470 00:25:11.807 [2024-11-17 01:45:20.199477] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:19462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.199542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.213248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdece0 00:25:11.807 [2024-11-17 01:45:20.215840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.215899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.229583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdf550 00:25:11.807 [2024-11-17 01:45:20.232276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21076 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.232330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.246007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bdfdc0 00:25:11.807 [2024-11-17 01:45:20.248612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:11.807 [2024-11-17 01:45:20.248665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:25:11.807 [2024-11-17 01:45:20.263011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0630 00:25:12.066 [2024-11-17 01:45:20.265981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:20232 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.066 [2024-11-17 01:45:20.266035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:25:12.066 [2024-11-17 01:45:20.280200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be0ea0 00:25:12.066 [2024-11-17 01:45:20.282701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.066 [2024-11-17 01:45:20.282763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:25:12.066 [2024-11-17 01:45:20.296535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be1710 00:25:12.066 [2024-11-17 01:45:20.299078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.066 [2024-11-17 01:45:20.299135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:25:12.066 [2024-11-17 01:45:20.312961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000004480) with pdu=0x200016be1f80 00:25:12.066 [2024-11-17 01:45:20.315426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.066 [2024-11-17 01:45:20.315479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:25:12.066 [2024-11-17 01:45:20.329623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be27f0 00:25:12.066 [2024-11-17 01:45:20.332224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:24998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.066 [2024-11-17 01:45:20.332276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:25:12.066 [2024-11-17 01:45:20.347041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be3060 00:25:12.066 [2024-11-17 01:45:20.349818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.066 [2024-11-17 01:45:20.349883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:25:12.066 [2024-11-17 01:45:20.366624] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be38d0 00:25:12.066 [2024-11-17 01:45:20.369494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.369566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.384442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be4140 00:25:12.067 [2024-11-17 01:45:20.386889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.386949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.400988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be49b0 00:25:12.067 [2024-11-17 01:45:20.403364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:3735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.403421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.417349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5220 00:25:12.067 [2024-11-17 01:45:20.419711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:21193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.419774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.434104] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be5a90 00:25:12.067 [2024-11-17 01:45:20.436565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:2604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.436617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.450680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6300 00:25:12.067 [2024-11-17 01:45:20.453238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:4696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.453291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.467404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be6b70 00:25:12.067 [2024-11-17 01:45:20.469883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:19712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.469945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.484103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be73e0 00:25:12.067 [2024-11-17 01:45:20.486407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:2443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.486464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.500717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be7c50 00:25:12.067 [2024-11-17 01:45:20.503066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.503140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:25:12.067 [2024-11-17 01:45:20.517396] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be84c0 00:25:12.067 [2024-11-17 01:45:20.519827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:5907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.067 [2024-11-17 01:45:20.519881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:25:12.326 [2024-11-17 01:45:20.534905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be8d30 00:25:12.326 [2024-11-17 01:45:20.537299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.326 [2024-11-17 01:45:20.537353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0054 p:0 
m:0 dnr:0 00:25:12.326 [2024-11-17 01:45:20.551520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be95a0 00:25:12.326 [2024-11-17 01:45:20.553966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:22749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.326 [2024-11-17 01:45:20.554021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:25:12.326 [2024-11-17 01:45:20.568056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016be9e10 00:25:12.326 [2024-11-17 01:45:20.570247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:4354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.326 [2024-11-17 01:45:20.570309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.585713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bea680 00:25:12.327 [2024-11-17 01:45:20.588150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:12105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.588215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.602990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beaef0 00:25:12.327 [2024-11-17 01:45:20.605361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.605420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.619585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beb760 00:25:12.327 [2024-11-17 01:45:20.621922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.621974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.635974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bebfd0 00:25:12.327 [2024-11-17 01:45:20.638132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.638185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.652182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bec840 00:25:12.327 [2024-11-17 01:45:20.654335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.654395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.668554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed0b0 00:25:12.327 [2024-11-17 01:45:20.670697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:7442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.670755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.684889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bed920 00:25:12.327 [2024-11-17 01:45:20.686948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.687001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.701278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bee190 00:25:12.327 [2024-11-17 01:45:20.703343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.703396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.717677] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016beea00 00:25:12.327 [2024-11-17 01:45:20.719795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.719858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.734060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bef270 00:25:12.327 [2024-11-17 01:45:20.736164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:3985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.736224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.750350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016befae0 00:25:12.327 [2024-11-17 01:45:20.752462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:7063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.752519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.766786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0350 00:25:12.327 [2024-11-17 01:45:20.768842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.327 [2024-11-17 01:45:20.768905] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:25:12.327 [2024-11-17 01:45:20.783742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf0bc0 00:25:12.587 [2024-11-17 01:45:20.785948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:24414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.786001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.800673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1430 00:25:12.587 [2024-11-17 01:45:20.802685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:11221 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.802736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.817349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf1ca0 00:25:12.587 [2024-11-17 01:45:20.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:3129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.819345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.833759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2510 00:25:12.587 [2024-11-17 01:45:20.835637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.835712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.850148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf2d80 00:25:12.587 [2024-11-17 01:45:20.852093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:18624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.852146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.866498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf35f0 00:25:12.587 [2024-11-17 01:45:20.868477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:5537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.868529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.883112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf3e60 00:25:12.587 [2024-11-17 01:45:20.885017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:25:12.587 [2024-11-17 01:45:20.885071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.899334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf46d0 00:25:12.587 [2024-11-17 01:45:20.901313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.901373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.915756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf4f40 00:25:12.587 [2024-11-17 01:45:20.917622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.917680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.932157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf57b0 00:25:12.587 [2024-11-17 01:45:20.933959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.934013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.948460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6020 00:25:12.587 [2024-11-17 01:45:20.950301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:19481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.950353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.965737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf6890 00:25:12.587 [2024-11-17 01:45:20.967679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22734 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.967722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:20.984319] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7100 00:25:12.587 [2024-11-17 01:45:20.986324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:24412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:25:12.587 [2024-11-17 01:45:20.986392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:25:12.587 [2024-11-17 01:45:21.003042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf7970 00:25:12.587 [2024-11-17 01:45:21.004970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 
nsid:1 lba:8987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.587 [2024-11-17 01:45:21.005030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:25:12.587 [2024-11-17 01:45:21.020705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf81e0
00:25:12.587 [2024-11-17 01:45:21.022494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:12231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.587 [2024-11-17 01:45:21.022553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:25:12.587 [2024-11-17 01:45:21.038337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x200016bf8a50
00:25:12.587 [2024-11-17 01:45:21.040275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:5685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:25:12.587 [2024-11-17 01:45:21.040351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:25:12.846 14928.00 IOPS, 58.31 MiB/s
00:25:12.847 Latency(us)
00:25:12.847 [2024-11-17T01:45:21.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:12.847 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:25:12.847 nvme0n1 : 2.01 14929.53 58.32 0.00 0.00 8565.89 7626.01 30384.87
00:25:12.847 [2024-11-17T01:45:21.306Z] ===================================================================================================================
00:25:12.847 [2024-11-17T01:45:21.306Z] Total : 14929.53 58.32 0.00 0.00 8565.89 7626.01 30384.87
00:25:12.847 {
00:25:12.847 "results": [
00:25:12.847 {
00:25:12.847 "job": "nvme0n1",
00:25:12.847 "core_mask": "0x2",
00:25:12.847 "workload": "randwrite",
00:25:12.847 "status": "finished",
00:25:12.847 "queue_depth": 128,
00:25:12.847 "io_size": 4096,
00:25:12.847 "runtime": 2.008368,
00:25:12.847 "iops": 14929.534826286816,
00:25:12.847 "mibps": 58.31849541518287,
00:25:12.847 "io_failed": 0,
00:25:12.847 "io_timeout": 0,
00:25:12.847 "avg_latency_us": 8565.885677694769,
00:25:12.847 "min_latency_us": 7626.007272727273,
00:25:12.847 "max_latency_us": 30384.872727272726
00:25:12.847 }
00:25:12.847 ],
00:25:12.847 "core_count": 1
00:25:12.847 }
00:25:12.847 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:25:12.847 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:25:12.847 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:25:12.847 | .driver_specific
00:25:12.847 | .nvme_error
00:25:12.847 | .status_code
00:25:12.847 | .command_transient_transport_error'
00:25:12.847 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 117 > 0 ))
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86468
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86468 ']'
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86468
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86468
00:25:13.106 killing process with pid 86468
00:25:13.106 Received shutdown signal, test time was about 2.000000 seconds
00:25:13.106
00:25:13.106 Latency(us)
00:25:13.106 [2024-11-17T01:45:21.565Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:13.106 [2024-11-17T01:45:21.565Z] ===================================================================================================================
00:25:13.106 [2024-11-17T01:45:21.565Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86468'
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86468
00:25:13.106 01:45:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86468
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=86525
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 86525 /var/tmp/bperf.sock
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 86525 ']'
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:25:13.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
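Each pass ends the same way: bdevperf prints its per-job table and JSON summary (io_failed stays 0 because the corrupted writes are retried), and the harness then reads the transient-transport-error counter back from the initiator and asserts it is non-zero, 117 in the run above, before killing that bperf instance. A minimal stand-alone form of that check, using the socket and bdev name from this log, would be:

  count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 ))   # fails the step if no digest errors were counted

The counter is populated because bdev_nvme_set_options is called with --nvme-error-stat for each bperf instance, as traced below for the next one. That next pass, run_bperf_err randwrite 131072 16, repeats the whole cycle with 128 KiB random writes at queue depth 16, which is why a fresh bdevperf is started on /var/tmp/bperf.sock here.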
00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:13.674 01:45:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:13.933 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:13.933 Zero copy mechanism will not be used. 00:25:13.933 [2024-11-17 01:45:22.216571] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:25:13.933 [2024-11-17 01:45:22.216744] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86525 ] 00:25:14.192 [2024-11-17 01:45:22.397319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:14.192 [2024-11-17 01:45:22.485521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.192 [2024-11-17 01:45:22.631123] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:14.760 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.760 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:25:14.760 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:14.760 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:15.020 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:15.020 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.020 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.020 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.020 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.020 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:15.279 nvme0n1 00:25:15.279 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:15.279 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.279 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:15.279 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.279 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:15.279 01:45:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:15.540 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:15.540 Zero copy mechanism will not be used. 00:25:15.540 Running I/O for 2 seconds... 00:25:15.540 [2024-11-17 01:45:23.766148] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.766266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.766307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.772296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.772398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.772435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.778260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.778383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.778417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.783958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.784106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.784135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.789674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.789770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.789806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.795303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.795413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.795449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.800929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 
01:45:23.801046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.801076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.806474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.806588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.806624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.812238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.812350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.812385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.817955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.818078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.818107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.823562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.823715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.823744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.829519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.829616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.829653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.835269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.835367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.835403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.841154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.841290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.841318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.846705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.846799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.846864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.852522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.852626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.852661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.858155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.858278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.858306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.863788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.863918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.863962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.869532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.869627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.540 [2024-11-17 01:45:23.869661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.540 [2024-11-17 01:45:23.875359] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.540 [2024-11-17 01:45:23.875453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.875488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.881076] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.881234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.881263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.886675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.886768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.886801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.892447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.892540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.892575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.898205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.898323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.898351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.903965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.904097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.904125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.909631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.909744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.909790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.915362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.915454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.915493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.921132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.921280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.921308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.926769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.926885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.926918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.932442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.932554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.932590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.938147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.938263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.938291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.943797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.943920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.943964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.949612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.949712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.949749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.955412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.955530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.955567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.961238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.961353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.961380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.966856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.966969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.967005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.972645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.972780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.972847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.978352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.978471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.978499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.984010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.984139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.984167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.989667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.989784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 01:45:23.989834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.541 [2024-11-17 01:45:23.996101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.541 [2024-11-17 01:45:23.996245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.541 [2024-11-17 
01:45:23.996298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.002274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.002395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.002425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.008476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.008594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.008640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.014660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.014789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.014827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.020828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.020963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.020992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.026950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.027070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.027105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.032669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.032785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.032852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.038446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.038547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.038575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.044206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.044327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.044354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.049858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.049977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.050013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.055454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.055554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.055588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.061283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.061399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.061427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.066951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.067049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.067082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.072767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.072916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.072953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.078371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.078491] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.078519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.084149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.084256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.084284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.089900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.802 [2024-11-17 01:45:24.090006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.802 [2024-11-17 01:45:24.090041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.802 [2024-11-17 01:45:24.095515] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.095635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.095690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.101322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.101445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.101473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.106923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.107025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.107058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.112651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.112745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.112780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.118361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.118465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.118493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.124146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.124246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.124274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.129745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.129864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.129902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.135316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.135433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.135467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.140946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.141069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.141097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.146526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.146648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.146676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.152410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.152523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.152558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.158047] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.158170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.158198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.163600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.163769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.163809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.169367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.169481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.169516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.174946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.175046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.175081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.180690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.180791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.180834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.186322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.186445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.186473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.192031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.192157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.192199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.197631] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.197741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.197768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.203317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.203443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.203472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.208997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.209092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.209127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.214598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.214708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.214743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.220373] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.220492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.220520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.225998] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.226120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.226148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.231521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.231639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.231690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.237295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.237396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.803 [2024-11-17 01:45:24.237424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:15.803 [2024-11-17 01:45:24.242910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.803 [2024-11-17 01:45:24.243028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.804 [2024-11-17 01:45:24.243055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:15.804 [2024-11-17 01:45:24.248533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.804 [2024-11-17 01:45:24.248638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.804 [2024-11-17 01:45:24.248675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:15.804 [2024-11-17 01:45:24.254080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:15.804 [2024-11-17 01:45:24.254226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.804 [2024-11-17 01:45:24.254264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.260486] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.260617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.260645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.266514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.266616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.266643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.272451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.272562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 
01:45:24.272599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.278082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.278184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.278214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.283714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.283878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.283909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.289427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.289540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.289574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.295015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.295125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.295159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.300626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.300752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.300780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.306236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.306340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.306377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.311858] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.311978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.312029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.317542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.317664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.317691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.323193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.323307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.323335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.329055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.329177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.329213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.334526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.334641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.334675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.340282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.340384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.340412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.345864] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.345981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.346008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.351556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.351695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.351737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.357371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.357489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.357525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.363031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.363137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.363165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.368643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.368747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.368798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.374268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.374363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.374399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.379847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.379956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.379985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.385400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.385517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.385544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.391001] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.391104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.391138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.396742] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.064 [2024-11-17 01:45:24.396897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.064 [2024-11-17 01:45:24.396933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.064 [2024-11-17 01:45:24.402369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.402490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.402518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.408079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.408215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.408250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.413775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.413881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.413919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.419332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.419438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.419466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.425027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.425131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.425159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.431204] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.431325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.431373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.437552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.437655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.437692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.444357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.444497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.444526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.450755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.450911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.450952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.457050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.457167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.457249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.463465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.463568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.463596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.469653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.469749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.469784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.475747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.475881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.475922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.481466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.481586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.481615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.487156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.487303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.487330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.493006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.493121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.493158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.498772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.498908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.498937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.504923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.505050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.505079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.510927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.511040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.511092] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.065 [2024-11-17 01:45:24.517266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.065 [2024-11-17 01:45:24.517404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.065 [2024-11-17 01:45:24.517442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.325 [2024-11-17 01:45:24.523448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.523573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.523601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.529543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.529640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.529675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.535281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.535383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.535417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.541090] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.541204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.541232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.546682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.546819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.546877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.552497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.552601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 
01:45:24.552636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.558125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.558234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.558269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.563911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.564052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.564080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.569554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.569648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.569683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.575258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.575373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.575414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.580962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.581063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.581090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.586615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.586727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.586755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.592352] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.592464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.592499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.598004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.598118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.598156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.603554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.603695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.603724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.609258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.609355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.609388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.614949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.615045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.615080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.620744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.620894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.620922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.626366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.626466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.626494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.632120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.632230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.632267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.637726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.637835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.637883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.643364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.643476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.643504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.649049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.649155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.649187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.654692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.654818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.654870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.660377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.660491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.326 [2024-11-17 01:45:24.660520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.326 [2024-11-17 01:45:24.666031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.326 [2024-11-17 01:45:24.666151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.666178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.671670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.671787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.671839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.677258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.677365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.677402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.682870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.682987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.683014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.688487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.688607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.688635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.694082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.694196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.694232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.699686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.699801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.699846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.705428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.705533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.705560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.710961] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.711068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.711104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.716701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.716811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.716846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.722218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.722318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.722345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.727955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.728073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.728101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.733628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.733743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.733780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.739387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.739510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.739555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.745771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.745906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.745936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.752013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.752135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.752163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.758499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.758601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.758630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.327 5362.00 IOPS, 670.25 MiB/s [2024-11-17T01:45:24.786Z] [2024-11-17 01:45:24.766547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.766648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.766678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.773203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.773342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.773369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.327 [2024-11-17 01:45:24.779662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.327 [2024-11-17 01:45:24.779785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.327 [2024-11-17 01:45:24.779815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.786062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.786176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.786204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.792369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.792497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.792543] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.798254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.798377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.798405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.804107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.804204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.804232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.809848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.809964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.809993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.815928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.816039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.816067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.821660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.821775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.821803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.827583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.827705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.827734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.833488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.833587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.833615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.839316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.839412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.839440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.845068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.845163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.845192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.850805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.850930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.850959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.856717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.856833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.856861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.862566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.862663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.862691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.868394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.868498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.868526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.874299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.874400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.874428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.880395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.880495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.880524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.886200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.886297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.886325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.891884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.892018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.892046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.897794] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.897975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.898004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.903699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.903816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.588 [2024-11-17 01:45:24.903876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.588 [2024-11-17 01:45:24.909474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.588 [2024-11-17 01:45:24.909590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.909621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.915263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 
01:45:24.915382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.915409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.921368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.921483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.921512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.927170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.927286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.927320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.933042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.933157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.933186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.938689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.938797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.938864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.944829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.944937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.944965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.950606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.950722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.950751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.956592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.956693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.956721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.962680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.962796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.962840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.968672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.968774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.968802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.974493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.974609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.974640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.980364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.980489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.980517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.986493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.986590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.986618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.992828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.992957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.992986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:24.998981] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:24.999121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:24.999150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:25.005136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:25.005258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:25.005287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:25.010880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:25.010972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:25.011000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:25.016710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:25.016803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:25.016831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:25.022306] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:25.022402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:25.022431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:25.028304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:25.028419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:25.028446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:25.033904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:25.034017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:25.034045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:25:16.589 [2024-11-17 01:45:25.039510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:16.589 [2024-11-17 01:45:25.039605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.589 [2024-11-17 01:45:25.039675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:25:16.849 ... 00:25:17.375 [the same repeating pattern continues for every write submitted between 01:45:25.045 and 01:45:25.752: a data_crc32_calc_done data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8, the offending WRITE command (qid:1, cid:0 through cid:4, varying lba, len:32), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0002/0022/0042/0062]
00:25:17.375 [2024-11-17 01:45:25.758046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:17.375 [2024-11-17 01:45:25.758341] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.375 [2024-11-17 01:45:25.758513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:25:17.375 [2024-11-17 01:45:25.764238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x200016bff3c8 00:25:17.375 5316.50 IOPS, 664.56 MiB/s [2024-11-17T01:45:25.834Z] [2024-11-17 01:45:25.766028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:17.375 [2024-11-17 01:45:25.766087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:25:17.375 00:25:17.375 Latency(us) 00:25:17.375 [2024-11-17T01:45:25.834Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.375 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:17.375 nvme0n1 : 2.00 5315.15 664.39 0.00 0.00 3002.93 2189.50 8162.21 00:25:17.375 [2024-11-17T01:45:25.834Z] =================================================================================================================== 00:25:17.375 [2024-11-17T01:45:25.834Z] Total : 5315.15 664.39 0.00 0.00 3002.93 2189.50 8162.21 00:25:17.375 { 00:25:17.375 "results": [ 00:25:17.375 { 00:25:17.375 "job": "nvme0n1", 00:25:17.375 "core_mask": "0x2", 00:25:17.375 "workload": "randwrite", 00:25:17.375 "status": "finished", 00:25:17.375 "queue_depth": 16, 00:25:17.375 "io_size": 131072, 00:25:17.375 "runtime": 2.00352, 00:25:17.375 "iops": 5315.1453441942185, 00:25:17.375 "mibps": 664.3931680242773, 00:25:17.375 "io_failed": 0, 00:25:17.375 "io_timeout": 0, 00:25:17.375 "avg_latency_us": 3002.9342207121454, 00:25:17.375 "min_latency_us": 2189.498181818182, 00:25:17.375 "max_latency_us": 8162.210909090909 00:25:17.375 } 00:25:17.375 ], 00:25:17.375 "core_count": 1 00:25:17.375 } 00:25:17.375 01:45:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:17.375 01:45:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:17.375 01:45:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:17.375 | .driver_specific 00:25:17.375 | .nvme_error 00:25:17.375 | .status_code 00:25:17.375 | .command_transient_transport_error' 00:25:17.375 01:45:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:17.971 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 344 > 0 )) 00:25:17.971 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 86525 00:25:17.971 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86525 ']' 00:25:17.972 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86525 00:25:17.972 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:17.972 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:17.972 01:45:26 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86525 00:25:17.972 killing process with pid 86525 00:25:17.972 Received shutdown signal, test time was about 2.000000 seconds 00:25:17.972 00:25:17.972 Latency(us) 00:25:17.972 [2024-11-17T01:45:26.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:17.972 [2024-11-17T01:45:26.431Z] =================================================================================================================== 00:25:17.972 [2024-11-17T01:45:26.431Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:17.972 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:17.972 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:17.972 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86525' 00:25:17.972 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86525 00:25:17.972 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86525 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 86303 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 86303 ']' 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 86303 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86303 00:25:18.540 killing process with pid 86303 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86303' 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 86303 00:25:18.540 01:45:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 86303 00:25:19.476 00:25:19.476 real 0m21.243s 00:25:19.476 user 0m40.705s 00:25:19.476 sys 0m4.534s 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:25:19.476 ************************************ 00:25:19.476 END TEST nvmf_digest_error 00:25:19.476 ************************************ 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@121 -- # sync 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:19.476 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:19.476 rmmod nvme_tcp 00:25:19.476 rmmod nvme_fabrics 00:25:19.736 rmmod nvme_keyring 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 86303 ']' 00:25:19.736 Process with pid 86303 is not found 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 86303 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 86303 ']' 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 86303 00:25:19.736 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (86303) - No such process 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 86303 is not found' 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:25:19.736 01:45:27 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:25:19.736 01:45:28 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@246 -- # remove_spdk_ns 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.736 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@300 -- # return 0 00:25:19.996 ************************************ 00:25:19.996 END TEST nvmf_digest 00:25:19.996 ************************************ 00:25:19.996 00:25:19.996 real 0m44.810s 00:25:19.996 user 1m24.449s 00:25:19.996 sys 0m9.417s 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 1 -eq 1 ]] 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # run_test nvmf_host_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.996 ************************************ 00:25:19.996 START TEST nvmf_host_multipath 00:25:19.996 ************************************ 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:25:19.996 * Looking for test storage... 
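The digest_error run that just finished passes because each data-digest failure reported above surfaces as a COMMAND TRANSIENT TRANSPORT ERROR completion, and host/digest.sh then reads the accumulated count back out of bdevperf's iostat (344 in this run) and asserts it is non-zero. A minimal sketch of that check, reusing the rpc.py call and jq filter visible above (the /var/tmp/bperf.sock socket is the one this run's bdevperf instance listens on):

  # Ask the bdevperf app how many transient transport errors nvme0n1 has accumulated.
  errcount=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
      bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # The test only requires that at least one digest error was detected and retried.
  (( errcount > 0 )) && echo "data-digest errors observed: $errcount"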
00:25:19.996 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@344 -- # case "$op" in 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@345 -- # : 1 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # decimal 1 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=1 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 1 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # decimal 2 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@353 -- # local d=2 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@355 -- # echo 2 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@368 -- # return 0 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:19.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.996 --rc genhtml_branch_coverage=1 00:25:19.996 --rc genhtml_function_coverage=1 00:25:19.996 --rc genhtml_legend=1 00:25:19.996 --rc geninfo_all_blocks=1 00:25:19.996 --rc geninfo_unexecuted_blocks=1 00:25:19.996 00:25:19.996 ' 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:19.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.996 --rc genhtml_branch_coverage=1 00:25:19.996 --rc genhtml_function_coverage=1 00:25:19.996 --rc genhtml_legend=1 00:25:19.996 --rc geninfo_all_blocks=1 00:25:19.996 --rc geninfo_unexecuted_blocks=1 00:25:19.996 00:25:19.996 ' 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:19.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.996 --rc genhtml_branch_coverage=1 00:25:19.996 --rc genhtml_function_coverage=1 00:25:19.996 --rc genhtml_legend=1 00:25:19.996 --rc geninfo_all_blocks=1 00:25:19.996 --rc geninfo_unexecuted_blocks=1 00:25:19.996 00:25:19.996 ' 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:19.996 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:19.996 --rc genhtml_branch_coverage=1 00:25:19.996 --rc genhtml_function_coverage=1 00:25:19.996 --rc genhtml_legend=1 00:25:19.996 --rc geninfo_all_blocks=1 00:25:19.996 --rc geninfo_unexecuted_blocks=1 00:25:19.996 00:25:19.996 ' 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@7 -- # uname -s 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.996 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@5 -- # export PATH 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@51 -- # : 0 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:20.257 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@14 
-- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@30 -- # nvmftestinit 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@460 -- # nvmf_veth_init 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@156 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:25:20.257 Cannot find device "nvmf_init_br" 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@162 -- # true 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:25:20.257 Cannot find device "nvmf_init_br2" 00:25:20.257 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@163 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:25:20.258 Cannot find device "nvmf_tgt_br" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@164 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:25:20.258 Cannot find device "nvmf_tgt_br2" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@165 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:25:20.258 Cannot find device "nvmf_init_br" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@166 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:25:20.258 Cannot find device "nvmf_init_br2" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@167 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:25:20.258 Cannot find device "nvmf_tgt_br" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@168 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:25:20.258 Cannot find device "nvmf_tgt_br2" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@169 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:25:20.258 Cannot find device "nvmf_br" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@170 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:25:20.258 Cannot find device "nvmf_init_if" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@171 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:25:20.258 Cannot find device "nvmf_init_if2" 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@172 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 
00:25:20.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@173 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:20.258 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@174 -- # true 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:20.258 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:20.517 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 
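At this point nvmf_veth_init has built the self-contained test network: veth pairs whose initiator-side ends (nvmf_init_if/nvmf_init_if2, 10.0.0.1/24 and 10.0.0.2/24) stay in the root namespace, while the target-side ends (nvmf_tgt_if/nvmf_tgt_if2, 10.0.0.3/24 and 10.0.0.4/24) are moved into nvmf_tgt_ns_spdk; the peer interfaces are attached to the nvmf_br bridge here and just below, the SPDK_NVMF iptables rules open TCP port 4420, and the pings that follow verify connectivity in both directions. Condensed to one of the two interface pairs, the same topology can be sketched as:

  # Namespace plus one veth pair, bridged (names and addresses as used in this run).
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br   # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br    # target side
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if
  ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ping -c 1 10.0.0.3   # root namespace -> target address inside the namespace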
00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:25:20.518 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:20.518 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:25:20.518 00:25:20.518 --- 10.0.0.3 ping statistics --- 00:25:20.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.518 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:25:20.518 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:25:20.518 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.063 ms 00:25:20.518 00:25:20.518 --- 10.0.0.4 ping statistics --- 00:25:20.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.518 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:20.518 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.518 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:25:20.518 00:25:20.518 --- 10.0.0.1 ping statistics --- 00:25:20.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.518 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:25:20.518 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:20.518 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.054 ms 00:25:20.518 00:25:20.518 --- 10.0.0.2 ping statistics --- 00:25:20.518 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.518 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@461 -- # return 0 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@509 -- # nvmfpid=86858 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@510 -- # waitforlisten 86858 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 86858 ']' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:20.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:20.518 01:45:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:20.777 [2024-11-17 01:45:29.015827] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
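With the target up inside the namespace (nvmf_tgt -i 0 -e 0xFFFF -m 0x3, pid 86858), multipath.sh provisions it over its RPC socket in the lines that follow: a TCP transport, a small Malloc bdev (64 MB, 512-byte blocks), and one subsystem exposed on two TCP listeners so the host can attach two paths; bdevperf then attaches both paths as a single multipath controller. Condensed, the rpc.py sequence below amounts to:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Target side (default socket): transport, backing bdev, subsystem, two listeners.
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421
  # Host side (bdevperf socket): the same subsystem attached over both ports as one
  # multipath bdev, Nvme0n1.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
      -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 \
      -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10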
00:25:20.777 [2024-11-17 01:45:29.016245] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.777 [2024-11-17 01:45:29.205095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:21.035 [2024-11-17 01:45:29.331388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.035 [2024-11-17 01:45:29.331697] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.035 [2024-11-17 01:45:29.331989] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.035 [2024-11-17 01:45:29.332259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.035 [2024-11-17 01:45:29.332411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.035 [2024-11-17 01:45:29.334736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.035 [2024-11-17 01:45:29.334746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.294 [2024-11-17 01:45:29.537800] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:25:21.553 01:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:21.553 01:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:21.553 01:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.553 01:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:21.553 01:45:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:21.816 01:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.816 01:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@33 -- # nvmfapp_pid=86858 00:25:21.816 01:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:22.075 [2024-11-17 01:45:30.278672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:22.075 01:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:22.334 Malloc0 00:25:22.334 01:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:22.594 01:45:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.853 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:25:22.853 [2024-11-17 01:45:31.301790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:25:23.111 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- 
host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:25:23.111 [2024-11-17 01:45:31.533906] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:25:23.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@44 -- # bdevperf_pid=86908 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@47 -- # waitforlisten 86908 /var/tmp/bdevperf.sock 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@835 -- # '[' -z 86908 ']' 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.112 01:45:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:25:24.489 01:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.489 01:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@868 -- # return 0 00:25:24.489 01:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:24.489 01:45:32 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:24.748 Nvme0n1 00:25:24.748 01:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.3 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:25.006 Nvme0n1 00:25:25.265 01:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:25.265 01:45:33 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@78 -- # sleep 1 00:25:26.203 01:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:25:26.203 01:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:26.462 01:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:26.722 01:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:25:26.722 01:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=86958 00:25:26.722 01:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86858 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:26.722 01:45:34 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:33.306 01:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:33.306 01:45:40 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:33.306 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:33.306 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:33.306 Attaching 4 probes... 00:25:33.306 @path[10.0.0.3, 4421]: 16307 00:25:33.306 @path[10.0.0.3, 4421]: 16888 00:25:33.306 @path[10.0.0.3, 4421]: 16907 00:25:33.306 @path[10.0.0.3, 4421]: 16698 00:25:33.306 @path[10.0.0.3, 4421]: 16640 00:25:33.306 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:33.306 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:33.306 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:33.306 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:33.306 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:33.306 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:33.307 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 86958 00:25:33.307 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:33.307 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:25:33.307 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:33.307 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:33.566 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:25:33.566 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87067 00:25:33.566 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:33.566 01:45:41 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86858 
/home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:40.138 01:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:40.138 01:45:47 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:40.138 Attaching 4 probes... 00:25:40.138 @path[10.0.0.3, 4420]: 16207 00:25:40.138 @path[10.0.0.3, 4420]: 16385 00:25:40.138 @path[10.0.0.3, 4420]: 16598 00:25:40.138 @path[10.0.0.3, 4420]: 16748 00:25:40.138 @path[10.0.0.3, 4420]: 16602 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87067 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87180 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86858 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:40.138 01:45:48 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:46.703 Attaching 4 probes... 00:25:46.703 @path[10.0.0.3, 4421]: 11962 00:25:46.703 @path[10.0.0.3, 4421]: 16443 00:25:46.703 @path[10.0.0.3, 4421]: 16429 00:25:46.703 @path[10.0.0.3, 4421]: 16400 00:25:46.703 @path[10.0.0.3, 4421]: 16397 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87180 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:25:46.703 01:45:54 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:25:46.962 01:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n inaccessible 00:25:46.962 01:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:25:46.962 01:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87298 00:25:46.962 01:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86858 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:46.962 01:45:55 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port= 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:53.531 Attaching 4 probes... 
00:25:53.531 00:25:53.531 00:25:53.531 00:25:53.531 00:25:53.531 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port= 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87298 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:25:53.531 01:46:01 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:25:53.791 01:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:25:53.791 01:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87409 00:25:53.791 01:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86858 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:25:53.791 01:46:02 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:00.359 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:00.359 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:00.359 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:00.359 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:00.359 Attaching 4 probes... 
00:26:00.359 @path[10.0.0.3, 4421]: 16013 00:26:00.359 @path[10.0.0.3, 4421]: 16189 00:26:00.359 @path[10.0.0.3, 4421]: 16159 00:26:00.359 @path[10.0.0.3, 4421]: 16182 00:26:00.359 @path[10.0.0.3, 4421]: 16328 00:26:00.359 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:00.359 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:00.359 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:00.360 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:00.360 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:00.360 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:00.360 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87409 00:26:00.360 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:00.360 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:00.360 01:46:08 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@101 -- # sleep 1 00:26:01.295 01:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:26:01.295 01:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87533 00:26:01.295 01:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86858 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:01.295 01:46:09 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:07.916 01:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:07.916 01:46:15 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4420 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:07.916 Attaching 4 probes... 
00:26:07.916 @path[10.0.0.3, 4420]: 15834 00:26:07.916 @path[10.0.0.3, 4420]: 16161 00:26:07.916 @path[10.0.0.3, 4420]: 15936 00:26:07.916 @path[10.0.0.3, 4420]: 16072 00:26:07.916 @path[10.0.0.3, 4420]: 16269 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4420 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87533 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 00:26:07.916 [2024-11-17 01:46:16.249241] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:26:07.916 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4421 -n optimized 00:26:08.176 01:46:16 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@111 -- # sleep 6 00:26:14.746 01:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:26:14.746 01:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@65 -- # dtrace_pid=87702 00:26:14.746 01:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 86858 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:26:14.746 01:46:22 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@66 -- # sleep 6 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@67 -- # active_port=4421 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:21.323 Attaching 4 probes... 
00:26:21.323 @path[10.0.0.3, 4421]: 15963 00:26:21.323 @path[10.0.0.3, 4421]: 16272 00:26:21.323 @path[10.0.0.3, 4421]: 16108 00:26:21.323 @path[10.0.0.3, 4421]: 16122 00:26:21.323 @path[10.0.0.3, 4421]: 16122 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # sed -n 1p 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # cut -d ']' -f1 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.3," {print $2}' 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@69 -- # port=4421 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@72 -- # kill 87702 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@114 -- # killprocess 86908 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 86908 ']' 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 86908 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86908 00:26:21.323 killing process with pid 86908 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86908' 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 86908 00:26:21.323 01:46:28 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 86908 00:26:21.323 { 00:26:21.323 "results": [ 00:26:21.323 { 00:26:21.323 "job": "Nvme0n1", 00:26:21.323 "core_mask": "0x4", 00:26:21.323 "workload": "verify", 00:26:21.323 "status": "terminated", 00:26:21.323 "verify_range": { 00:26:21.323 "start": 0, 00:26:21.323 "length": 16384 00:26:21.323 }, 00:26:21.323 "queue_depth": 128, 00:26:21.323 "io_size": 4096, 00:26:21.323 "runtime": 55.338571, 00:26:21.323 "iops": 6916.983815863261, 00:26:21.323 "mibps": 27.019468030715863, 00:26:21.323 "io_failed": 0, 00:26:21.323 "io_timeout": 0, 00:26:21.323 "avg_latency_us": 18482.242329242643, 00:26:21.323 "min_latency_us": 1362.850909090909, 00:26:21.323 "max_latency_us": 7046430.72 00:26:21.323 } 00:26:21.323 ], 00:26:21.323 "core_count": 1 00:26:21.323 } 00:26:21.323 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@116 -- # wait 86908 00:26:21.323 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:21.323 [2024-11-17 01:45:31.641531] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 
24.03.0 initialization... 00:26:21.324 [2024-11-17 01:45:31.641686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86908 ] 00:26:21.324 [2024-11-17 01:45:31.806671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.324 [2024-11-17 01:45:31.893186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:21.324 [2024-11-17 01:45:32.042087] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:21.324 Running I/O for 90 seconds... 00:26:21.324 6556.00 IOPS, 25.61 MiB/s [2024-11-17T01:46:29.783Z] 7242.00 IOPS, 28.29 MiB/s [2024-11-17T01:46:29.783Z] 7606.67 IOPS, 29.71 MiB/s [2024-11-17T01:46:29.783Z] 7817.00 IOPS, 30.54 MiB/s [2024-11-17T01:46:29.783Z] 7943.20 IOPS, 31.03 MiB/s [2024-11-17T01:46:29.783Z] 8015.50 IOPS, 31.31 MiB/s [2024-11-17T01:46:29.783Z] 8061.14 IOPS, 31.49 MiB/s [2024-11-17T01:46:29.783Z] 8077.50 IOPS, 31.55 MiB/s [2024-11-17T01:46:29.783Z] [2024-11-17 01:45:41.754945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:5440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755398] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:5488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:5504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.755922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.755964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 
[2024-11-17 01:45:41.755984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.756045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.756093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.756139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.324 [2024-11-17 01:45:41.756186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.324 [2024-11-17 01:45:41.756233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.324 [2024-11-17 01:45:41.756291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.324 [2024-11-17 01:45:41.756338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:5056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.324 [2024-11-17 01:45:41.756384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.324 [2024-11-17 01:45:41.756437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5072 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.324 [2024-11-17 01:45:41.756484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:5080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.324 [2024-11-17 01:45:41.756530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:5568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.756576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.756623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:5584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.756669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.756750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.324 [2024-11-17 01:45:41.756820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:21.324 [2024-11-17 01:45:41.756850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.756871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.756909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:5616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.756930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.756957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.756977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:121 nsid:1 lba:5632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:5656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:5680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:5688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:5704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:5088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.325 [2024-11-17 01:45:41.757543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.325 [2024-11-17 01:45:41.757594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.325 [2024-11-17 01:45:41.757642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.325 [2024-11-17 01:45:41.757689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.325 [2024-11-17 01:45:41.757736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.325 [2024-11-17 01:45:41.757783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:5136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.325 [2024-11-17 01:45:41.757849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.325 [2024-11-17 01:45:41.757895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:5712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.757971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.757991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:21.325 
[2024-11-17 01:45:41.758024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:5768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:5776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:5784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:5792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:5800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 
sqhd:0055 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:5808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:5832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:21.325 [2024-11-17 01:45:41.758761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:5848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.325 [2024-11-17 01:45:41.758782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.758839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:5856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.758861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.758890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:5152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.758914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.758942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.758963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.758992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:5168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:5176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:5184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:5192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:5216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:5232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:5240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759582] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:5272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.759792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:5864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.759867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:5872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.759923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.759983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:5880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:5888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:5912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:21.326 [2024-11-17 01:45:41.760229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:5928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:5936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:5968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.326 [2024-11-17 01:45:41.760646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.760697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.760778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5296 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.760828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.760876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.760966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.760997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:5320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.761018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.326 [2024-11-17 01:45:41.761045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.326 [2024-11-17 01:45:41.761066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.761128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:5976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.761178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:5344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.761226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.761274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.761321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761347] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:114 nsid:1 lba:5368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.761367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:5376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.761414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:5384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.761460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.761487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.761507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.763472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:5400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:41.763511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.763565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:5984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.763590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.763663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:5992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.763689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.763721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.763743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.763774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.763796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.763841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:6016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.763867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.763898] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.763938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.763982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.764033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:41.764095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:6040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:41.764126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:21.327 8058.11 IOPS, 31.48 MiB/s [2024-11-17T01:46:29.786Z] 8075.70 IOPS, 31.55 MiB/s [2024-11-17T01:46:29.786Z] 8092.45 IOPS, 31.61 MiB/s [2024-11-17T01:46:29.786Z] 8109.75 IOPS, 31.68 MiB/s [2024-11-17T01:46:29.786Z] 8123.46 IOPS, 31.73 MiB/s [2024-11-17T01:46:29.786Z] 8137.50 IOPS, 31.79 MiB/s [2024-11-17T01:46:29.786Z] [2024-11-17 01:45:48.346811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.346877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.346970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.346996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 
01:45:48.347239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.327 [2024-11-17 01:45:48.347703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:48.347767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:48.347836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.327 [2024-11-17 01:45:48.347891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:21.327 [2024-11-17 01:45:48.347921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.347957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:44808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:44824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:44832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:44856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:44864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:44896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.348557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.348626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.348675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:45240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.348721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.348766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:45256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.348835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:45264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.348885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.348930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.348956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:45280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.348976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.349030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:45296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.349079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.349125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:45312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.349190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.349236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:45328 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:21.328 [2024-11-17 01:45:48.349282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.349328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:45344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.328 [2024-11-17 01:45:48.349375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.349420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.349467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:44920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.328 [2024-11-17 01:45:48.349512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:21.328 [2024-11-17 01:45:48.349538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:44928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.349557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:44936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.349611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.349659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.349705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:10 nsid:1 lba:44960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.349750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.349812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.349863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.349910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:45376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.349956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.349982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:45392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:45408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350241] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:45424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:45456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:45464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:45480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:45488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:45496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:001c p:0 m:0 dnr:0 
00:26:21.329 [2024-11-17 01:45:48.350736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.350756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:44968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.350803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.350910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:44984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.350962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.350999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:44992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.351019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:45000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.351067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:45008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.351114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:45016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.351161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:45024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.329 [2024-11-17 01:45:48.351222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.351269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:45520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.351316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:45528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.351362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:45536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.351409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:45544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.351455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:21.329 [2024-11-17 01:45:48.351482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.329 [2024-11-17 01:45:48.351509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.351538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.351558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.351585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.351605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.351662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.351684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.351713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.351735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.351765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.351786] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.351837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.351862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.351891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.351913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.351971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.352005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.352052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.352108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.352155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.352210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.330 [2024-11-17 01:45:48.352275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.330 [2024-11-17 01:45:48.352323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:45048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:21.330 [2024-11-17 01:45:48.352371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:45056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.330 [2024-11-17 01:45:48.352418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:45064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.330 [2024-11-17 01:45:48.352465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:45072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.330 [2024-11-17 01:45:48.352531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.352574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.330 [2024-11-17 01:45:48.352594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:45088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.330 [2024-11-17 01:45:48.353372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:45664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 
nsid:1 lba:45688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:45696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:45704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:45720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.353962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.353982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.354015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:45736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.354035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.354069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:45744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.354089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.354123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.354143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.354176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:45760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.354196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.354230] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.354250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.354289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:45776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.354310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.354361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.354383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:21.330 [2024-11-17 01:45:48.354420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:45792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.330 [2024-11-17 01:45:48.354441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:21.330 8019.00 IOPS, 31.32 MiB/s [2024-11-17T01:46:29.789Z] 7628.75 IOPS, 29.80 MiB/s [2024-11-17T01:46:29.789Z] 7660.82 IOPS, 29.93 MiB/s [2024-11-17T01:46:29.790Z] 7692.33 IOPS, 30.05 MiB/s [2024-11-17T01:46:29.790Z] 7724.42 IOPS, 30.17 MiB/s [2024-11-17T01:46:29.790Z] 7747.80 IOPS, 30.26 MiB/s [2024-11-17T01:46:29.790Z] 7768.10 IOPS, 30.34 MiB/s [2024-11-17T01:46:29.790Z] [2024-11-17 01:45:55.393648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.331 [2024-11-17 01:45:55.393750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.393876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.331 [2024-11-17 01:45:55.393911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.393945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.331 [2024-11-17 01:45:55.393966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.393995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.331 [2024-11-17 01:45:55.394016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.331 [2024-11-17 01:45:55.394082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 
00:26:21.331 [2024-11-17 01:45:55.394112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:47000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.331 [2024-11-17 01:45:55.394132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:47008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.331 [2024-11-17 01:45:55.394182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:47016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.331 [2024-11-17 01:45:55.394247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:46384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:46392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:46400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:46416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:46424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:46432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:46440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:46448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:46456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:46464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:46472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.394954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:46480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.394974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.395003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:46488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.395024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.395064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:46496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.395087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.395130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.395151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:21.331 [2024-11-17 01:45:55.395179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:46512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.331 [2024-11-17 01:45:55.395199] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:26:21.331 [2024-11-17 01:45:55.395227 - 01:45:55.402120] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [repeated *NOTICE* command/completion pairs condensed: READ sqid:1 nsid:1 lba:46520-46952 len:8 SGL TRANSPORT DATA BLOCK and WRITE sqid:1 nsid:1 lba:47024-47400 len:8 SGL DATA BLOCK OFFSET, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 p:0 m:0 dnr:0]
00:26:21.334 7710.82 IOPS, 30.12 MiB/s [2024-11-17T01:46:29.793Z] 7375.57 IOPS, 28.81 MiB/s [2024-11-17T01:46:29.793Z] 7068.25 IOPS, 27.61 MiB/s [2024-11-17T01:46:29.793Z] 6785.52 IOPS, 26.51 MiB/s [2024-11-17T01:46:29.793Z] 6524.54 IOPS, 25.49 MiB/s [2024-11-17T01:46:29.793Z] 6282.89 IOPS, 24.54 MiB/s [2024-11-17T01:46:29.793Z] 6058.50 IOPS, 23.67 MiB/s [2024-11-17T01:46:29.793Z] 5894.45 IOPS, 23.03 MiB/s [2024-11-17T01:46:29.793Z] 5965.03 IOPS, 23.30 MiB/s [2024-11-17T01:46:29.793Z] 6032.74 IOPS, 23.57 MiB/s [2024-11-17T01:46:29.793Z] 6097.22 IOPS, 23.82 MiB/s [2024-11-17T01:46:29.793Z] 6159.24 IOPS, 24.06 MiB/s [2024-11-17T01:46:29.793Z] 6218.32 IOPS, 24.29 MiB/s [2024-11-17T01:46:29.793Z] 6268.54 IOPS, 24.49 MiB/s [2024-11-17T01:46:29.793Z]
00:26:21.334 [2024-11-17 01:46:08.730270 - 01:46:08.734586] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: [repeated *NOTICE* command/completion pairs condensed: WRITE sqid:1 nsid:1 lba:62072-62440 len:8 SGL DATA BLOCK OFFSET and READ sqid:1 nsid:1 lba:61496-61936 len:8 SGL TRANSPORT DATA BLOCK; completions for lba:62072-62128 report ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 sqhd:0008-000f, all subsequent completions report ABORTED - SQ DELETION (00/08) qid:1 cid:0 sqhd:0000 p:0 m:0 dnr:0]
00:26:21.337 [2024-11-17 01:46:08.734604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:62448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:21.337 [2024-11-17 01:46:08.734628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:61952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:62000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.734972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:62008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.734989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:62016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.735025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:62024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.735062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:62032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.735099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:62040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.337 [2024-11-17 01:46:08.735145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bf00 is same with the state(6) to be set 00:26:21.337 [2024-11-17 01:46:08.735188] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.337 [2024-11-17 01:46:08.735203] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.337 [2024-11-17 01:46:08.735218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62048 len:8 PRP1 0x0 PRP2 0x0 00:26:21.337 [2024-11-17 01:46:08.735247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735267] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.337 [2024-11-17 01:46:08.735282] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.337 [2024-11-17 01:46:08.735296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62056 len:8 PRP1 0x0 PRP2 0x0 00:26:21.337 [2024-11-17 01:46:08.735312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.337 [2024-11-17 01:46:08.735342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.337 [2024-11-17 01:46:08.735356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:62064 len:8 PRP1 0x0 PRP2 0x0 00:26:21.337 [2024-11-17 01:46:08.735372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.337 [2024-11-17 01:46:08.735402] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.337 [2024-11-17 01:46:08.735416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62456 len:8 PRP1 0x0 PRP2 0x0 00:26:21.337 [2024-11-17 01:46:08.735433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735449] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.337 [2024-11-17 01:46:08.735462] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.337 [2024-11-17 01:46:08.735476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62464 len:8 PRP1 0x0 PRP2 0x0 00:26:21.337 [2024-11-17 01:46:08.735492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.337 [2024-11-17 01:46:08.735522] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.337 [2024-11-17 01:46:08.735535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62472 len:8 PRP1 0x0 PRP2 0x0 00:26:21.337 [2024-11-17 01:46:08.735551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735567] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.337 [2024-11-17 01:46:08.735581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.337 [2024-11-17 01:46:08.735595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62480 len:8 PRP1 0x0 PRP2 0x0 00:26:21.337 [2024-11-17 01:46:08.735645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.337 [2024-11-17 01:46:08.735665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.338 [2024-11-17 01:46:08.735678] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.338 [2024-11-17 01:46:08.735693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62488 len:8 PRP1 0x0 PRP2 0x0 00:26:21.338 [2024-11-17 01:46:08.735709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 01:46:08.735726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.338 [2024-11-17 01:46:08.735739] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.338 [2024-11-17 01:46:08.735753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62496 len:8 PRP1 0x0 PRP2 0x0 00:26:21.338 [2024-11-17 01:46:08.735771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 01:46:08.735789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.338 [2024-11-17 01:46:08.735802] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.338 [2024-11-17 01:46:08.735830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62504 len:8 PRP1 0x0 PRP2 0x0 00:26:21.338 [2024-11-17 01:46:08.735847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 
01:46:08.735865] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:21.338 [2024-11-17 01:46:08.735879] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:21.338 [2024-11-17 01:46:08.735893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62512 len:8 PRP1 0x0 PRP2 0x0 00:26:21.338 [2024-11-17 01:46:08.735909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 01:46:08.736259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.338 [2024-11-17 01:46:08.736291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 01:46:08.736312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.338 [2024-11-17 01:46:08.736329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 01:46:08.736346] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.338 [2024-11-17 01:46:08.736363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 01:46:08.736380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.338 [2024-11-17 01:46:08.736397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 01:46:08.736430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:0014000c cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:21.338 [2024-11-17 01:46:08.736451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.338 [2024-11-17 01:46:08.736477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:21.338 [2024-11-17 01:46:08.737652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.338 [2024-11-17 01:46:08.737709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:26:21.338 [2024-11-17 01:46:08.738146] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:21.338 [2024-11-17 01:46:08.738187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002b500 with addr=10.0.0.3, port=4421 00:26:21.338 [2024-11-17 01:46:08.738210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:21.338 [2024-11-17 01:46:08.738254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002b500 (9): Bad file descriptor 00:26:21.338 [2024-11-17 01:46:08.738296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:21.338 [2024-11-17 01:46:08.738320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:21.338 [2024-11-17 01:46:08.738339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:21.338 [2024-11-17 01:46:08.738358] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:26:21.338 [2024-11-17 01:46:08.738384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:21.338 6316.33 IOPS, 24.67 MiB/s [2024-11-17T01:46:29.797Z] 6353.68 IOPS, 24.82 MiB/s [2024-11-17T01:46:29.797Z] 6399.79 IOPS, 25.00 MiB/s [2024-11-17T01:46:29.797Z] 6442.87 IOPS, 25.17 MiB/s [2024-11-17T01:46:29.797Z] 6481.95 IOPS, 25.32 MiB/s [2024-11-17T01:46:29.797Z] 6520.10 IOPS, 25.47 MiB/s [2024-11-17T01:46:29.797Z] 6558.07 IOPS, 25.62 MiB/s [2024-11-17T01:46:29.797Z] 6588.09 IOPS, 25.73 MiB/s [2024-11-17T01:46:29.797Z] 6622.86 IOPS, 25.87 MiB/s [2024-11-17T01:46:29.797Z] 6653.69 IOPS, 25.99 MiB/s [2024-11-17T01:46:29.797Z] [2024-11-17 01:46:18.799453] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:26:21.338 6681.91 IOPS, 26.10 MiB/s [2024-11-17T01:46:29.797Z] 6711.83 IOPS, 26.22 MiB/s [2024-11-17T01:46:29.797Z] 6744.83 IOPS, 26.35 MiB/s [2024-11-17T01:46:29.797Z] 6774.06 IOPS, 26.46 MiB/s [2024-11-17T01:46:29.797Z] 6795.36 IOPS, 26.54 MiB/s [2024-11-17T01:46:29.797Z] 6820.39 IOPS, 26.64 MiB/s [2024-11-17T01:46:29.797Z] 6844.31 IOPS, 26.74 MiB/s [2024-11-17T01:46:29.797Z] 6869.28 IOPS, 26.83 MiB/s [2024-11-17T01:46:29.797Z] 6891.41 IOPS, 26.92 MiB/s [2024-11-17T01:46:29.797Z] 6913.16 IOPS, 27.00 MiB/s [2024-11-17T01:46:29.797Z] Received shutdown signal, test time was about 55.339377 seconds 00:26:21.338 00:26:21.338 Latency(us) 00:26:21.338 [2024-11-17T01:46:29.797Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.338 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:21.338 Verification LBA range: start 0x0 length 0x4000 00:26:21.338 Nvme0n1 : 55.34 6916.98 27.02 0.00 0.00 18482.24 1362.85 7046430.72 00:26:21.338 [2024-11-17T01:46:29.797Z] =================================================================================================================== 00:26:21.338 [2024-11-17T01:46:29.797Z] Total : 6916.98 27.02 0.00 0.00 18482.24 1362.85 7046430.72 00:26:21.338 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- host/multipath.sh@125 -- # nvmftestfini 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@121 -- # sync 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@124 -- # 
set +e 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:21.598 01:46:29 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:21.598 rmmod nvme_tcp 00:26:21.598 rmmod nvme_fabrics 00:26:21.598 rmmod nvme_keyring 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@128 -- # set -e 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@129 -- # return 0 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@517 -- # '[' -n 86858 ']' 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@518 -- # killprocess 86858 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@954 -- # '[' -z 86858 ']' 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@958 -- # kill -0 86858 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # uname 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86858 00:26:21.598 killing process with pid 86858 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86858' 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@973 -- # kill 86858 00:26:21.598 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@978 -- # wait 86858 00:26:22.536 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:22.536 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:22.536 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:22.536 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@297 -- # iptr 00:26:22.536 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-save 00:26:22.536 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:22.536 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:26:22.795 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:22.795 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:26:22.795 01:46:30 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:26:22.795 01:46:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@246 -- # remove_spdk_ns 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- nvmf/common.sh@300 -- # return 0 00:26:22.795 ************************************ 00:26:22.795 END TEST nvmf_host_multipath 00:26:22.795 ************************************ 00:26:22.795 00:26:22.795 real 1m2.992s 00:26:22.795 user 2m54.421s 00:26:22.795 sys 0m16.656s 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:22.795 01:46:31 nvmf_tcp.nvmf_host.nvmf_host_multipath -- common/autotest_common.sh@10 -- # set +x 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@43 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:23.055 ************************************ 00:26:23.055 START TEST nvmf_timeout 00:26:23.055 ************************************ 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:26:23.055 * Looking for test storage... 
00:26:23.055 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lcov --version 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@344 -- # case "$op" in 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@345 -- # : 1 00:26:23.055 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=1 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 1 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@353 -- # local d=2 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@355 -- # echo 2 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@368 -- # return 0 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:23.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.056 --rc genhtml_branch_coverage=1 00:26:23.056 --rc genhtml_function_coverage=1 00:26:23.056 --rc genhtml_legend=1 00:26:23.056 --rc geninfo_all_blocks=1 00:26:23.056 --rc geninfo_unexecuted_blocks=1 00:26:23.056 00:26:23.056 ' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:23.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.056 --rc genhtml_branch_coverage=1 00:26:23.056 --rc genhtml_function_coverage=1 00:26:23.056 --rc genhtml_legend=1 00:26:23.056 --rc geninfo_all_blocks=1 00:26:23.056 --rc geninfo_unexecuted_blocks=1 00:26:23.056 00:26:23.056 ' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:23.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.056 --rc genhtml_branch_coverage=1 00:26:23.056 --rc genhtml_function_coverage=1 00:26:23.056 --rc genhtml_legend=1 00:26:23.056 --rc geninfo_all_blocks=1 00:26:23.056 --rc geninfo_unexecuted_blocks=1 00:26:23.056 00:26:23.056 ' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:23.056 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:23.056 --rc genhtml_branch_coverage=1 00:26:23.056 --rc genhtml_function_coverage=1 00:26:23.056 --rc genhtml_legend=1 00:26:23.056 --rc geninfo_all_blocks=1 00:26:23.056 --rc geninfo_unexecuted_blocks=1 00:26:23.056 00:26:23.056 ' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:23.056 
01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@5 -- # export PATH 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@51 -- # : 0 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:23.056 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@19 -- # nvmftestinit 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:23.056 01:46:31 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:23.056 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@460 -- # nvmf_veth_init 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:26:23.316 Cannot find device "nvmf_init_br" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@162 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:26:23.316 Cannot find device "nvmf_init_br2" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@163 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 
-- # ip link set nvmf_tgt_br nomaster 00:26:23.316 Cannot find device "nvmf_tgt_br" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@164 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:26:23.316 Cannot find device "nvmf_tgt_br2" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@165 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:26:23.316 Cannot find device "nvmf_init_br" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@166 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:26:23.316 Cannot find device "nvmf_init_br2" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@167 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:26:23.316 Cannot find device "nvmf_tgt_br" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@168 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:26:23.316 Cannot find device "nvmf_tgt_br2" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@169 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:26:23.316 Cannot find device "nvmf_br" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@170 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:26:23.316 Cannot find device "nvmf_init_if" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@171 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:26:23.316 Cannot find device "nvmf_init_if2" 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@172 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:23.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@173 -- # true 00:26:23.316 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:23.316 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@174 -- # true 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 
00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:23.317 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 
00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:26:23.576 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:23.576 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.131 ms 00:26:23.576 00:26:23.576 --- 10.0.0.3 ping statistics --- 00:26:23.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.576 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:26:23.576 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:26:23.576 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:26:23.576 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.048 ms 00:26:23.576 00:26:23.576 --- 10.0.0.4 ping statistics --- 00:26:23.576 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.576 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:23.577 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:23.577 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:26:23.577 00:26:23.577 --- 10.0.0.1 ping statistics --- 00:26:23.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.577 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:26:23.577 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:23.577 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.049 ms 00:26:23.577 00:26:23.577 --- 10.0.0.2 ping statistics --- 00:26:23.577 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:23.577 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@461 -- # return 0 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@509 -- # nvmfpid=88087 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@510 -- # waitforlisten 88087 00:26:23.577 01:46:31 
nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88087 ']' 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:23.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:23.577 01:46:31 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:23.836 [2024-11-17 01:46:32.041849] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:23.836 [2024-11-17 01:46:32.042017] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:23.836 [2024-11-17 01:46:32.221456] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:24.095 [2024-11-17 01:46:32.303348] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.095 [2024-11-17 01:46:32.303411] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.095 [2024-11-17 01:46:32.303445] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.095 [2024-11-17 01:46:32.303468] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.095 [2024-11-17 01:46:32.303480] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:24.095 [2024-11-17 01:46:32.305372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.095 [2024-11-17 01:46:32.305393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.095 [2024-11-17 01:46:32.453554] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:24.663 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.663 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:24.663 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:24.663 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:24.663 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:24.663 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:24.663 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:24.663 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:24.922 [2024-11-17 01:46:33.248660] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:24.922 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:25.182 Malloc0 00:26:25.182 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.442 01:46:33 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:25.701 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:25.961 [2024-11-17 01:46:34.268860] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@32 -- # bdevperf_pid=88138 00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@34 -- # waitforlisten 88138 /var/tmp/bdevperf.sock 00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88138 ']' 00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:25.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
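(The trace above brings up the NVMe-oF target inside the namespace and configures it over JSON-RPC: a TCP transport is created, a malloc bdev named Malloc0 is added, subsystem nqn.2016-06.io.spdk:cnode1 gets that bdev as a namespace, and a TCP listener is opened on 10.0.0.3:4420. A condensed sketch of the same sequence, with the repository paths shortened; the commands are the ones visible in the trace and talk to the target's default /var/tmp/spdk.sock RPC socket.)

    # start the target inside the network namespace (as nvmf/common.sh does above)
    ip netns exec nvmf_tgt_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &
    # once it is listening on /var/tmp/spdk.sock, configure it over RPC
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420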
00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:25.961 01:46:34 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:25.961 [2024-11-17 01:46:34.370960] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:25.961 [2024-11-17 01:46:34.371121] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88138 ] 00:26:26.220 [2024-11-17 01:46:34.547429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.220 [2024-11-17 01:46:34.671355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:26.479 [2024-11-17 01:46:34.839112] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:27.047 01:46:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:27.047 01:46:35 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:27.047 01:46:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:27.306 01:46:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:26:27.565 NVMe0n1 00:26:27.565 01:46:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@51 -- # rpc_pid=88161 00:26:27.565 01:46:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:27.565 01:46:35 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@53 -- # sleep 1 00:26:27.824 Running I/O for 10 seconds... 
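(On the host side, bdevperf is launched in wait-for-RPC mode (-z) on its own socket, bdev_nvme options are set, the remote controller is attached with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2 so the timeout behaviour under test triggers quickly, and a 10-second verify workload at queue depth 128 with 4 KiB I/O is started through perform_tests. A condensed sketch with the same flags as the trace, paths shortened and the script's waitforlisten/sleep synchronization omitted.)

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f &
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &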
00:26:28.762 01:46:36 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:28.762 6549.00 IOPS, 25.58 MiB/s [2024-11-17T01:46:37.221Z]
[2024-11-17 01:46:37.215573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002c80 is same with the state(6) to be set
[... the same tcp.c:1773 recv-state message repeats for tqpair=0x618000002c80 with advancing timestamps while the listener is removed; repeated lines trimmed ...]
[2024-11-17 01:46:37.217463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:58656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-11-17 01:46:37.217607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... matching READ / "ABORTED - SQ DELETION" pairs continue for the remaining outstanding reads on qid:1 (len:8 each, lba 58664 up through at least 59512); repeated lines trimmed, log continues ...]
lba:59520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.028 [2024-11-17 01:46:37.236241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.236268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:59528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.028 [2024-11-17 01:46:37.236405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.236530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:59536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.028 [2024-11-17 01:46:37.236554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.236833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:59560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.028 [2024-11-17 01:46:37.236875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.236896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:59568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.028 [2024-11-17 01:46:37.236913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.237029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:59576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.028 [2024-11-17 01:46:37.237184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.237321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:59584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.028 [2024-11-17 01:46:37.237347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.237363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:59592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.028 [2024-11-17 01:46:37.237613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.237750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:59600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.028 [2024-11-17 01:46:37.237882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.237917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:59608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.028 [2024-11-17 01:46:37.238052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.238189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:59616 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:29.028 [2024-11-17 01:46:37.238332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.028 [2024-11-17 01:46:37.238461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:59624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.028 [2024-11-17 01:46:37.238489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.238733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:59632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.029 [2024-11-17 01:46:37.238768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.238996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:59640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.029 [2024-11-17 01:46:37.239020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.239037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:59648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.029 [2024-11-17 01:46:37.239271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.239300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:59656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.029 [2024-11-17 01:46:37.239318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.239577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:59664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.029 [2024-11-17 01:46:37.239729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.239971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:59672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:29.029 [2024-11-17 01:46:37.240024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.240044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:59544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.029 [2024-11-17 01:46:37.240060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.240075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:26:29.029 [2024-11-17 01:46:37.240200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:29.029 [2024-11-17 01:46:37.240348] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:29.029 [2024-11-17 01:46:37.240474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:59552 len:8 
PRP1 0x0 PRP2 0x0 00:26:29.029 [2024-11-17 01:46:37.240493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.241163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.029 [2024-11-17 01:46:37.241242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.241276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.029 [2024-11-17 01:46:37.241292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.241538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.029 [2024-11-17 01:46:37.241560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.241576] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:29.029 [2024-11-17 01:46:37.241822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:29.029 [2024-11-17 01:46:37.241874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:29.029 [2024-11-17 01:46:37.242195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:29.029 [2024-11-17 01:46:37.242240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:29.029 01:46:37 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@56 -- # sleep 2 00:26:29.029 [2024-11-17 01:46:37.242377] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.029 [2024-11-17 01:46:37.242424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:29.029 [2024-11-17 01:46:37.242443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:29.029 [2024-11-17 01:46:37.242472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:29.029 [2024-11-17 01:46:37.242496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:29.029 [2024-11-17 01:46:37.242515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:29.029 [2024-11-17 01:46:37.242530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:29.029 [2024-11-17 01:46:37.242546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:26:29.029 [2024-11-17 01:46:37.242561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:30.904 3666.00 IOPS, 14.32 MiB/s [2024-11-17T01:46:39.363Z] 2444.00 IOPS, 9.55 MiB/s [2024-11-17T01:46:39.363Z] [2024-11-17 01:46:39.242694] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.904 [2024-11-17 01:46:39.242787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:30.904 [2024-11-17 01:46:39.242835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:30.904 [2024-11-17 01:46:39.243340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:30.904 [2024-11-17 01:46:39.243387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:30.904 [2024-11-17 01:46:39.243423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:30.904 [2024-11-17 01:46:39.243439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:30.904 [2024-11-17 01:46:39.243457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:30.904 [2024-11-17 01:46:39.243487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:30.904 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # get_controller 00:26:30.904 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:30.904 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:31.163 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:26:31.163 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # get_bdev 00:26:31.163 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:31.163 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:31.422 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:26:31.422 01:46:39 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@61 -- # sleep 5 00:26:33.059 1833.00 IOPS, 7.16 MiB/s [2024-11-17T01:46:41.518Z] 1466.40 IOPS, 5.73 MiB/s [2024-11-17T01:46:41.518Z] [2024-11-17 01:46:41.243873] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:33.059 [2024-11-17 01:46:41.243957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:26:33.059 [2024-11-17 01:46:41.243978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:26:33.059 [2024-11-17 01:46:41.244028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:26:33.059 [2024-11-17 01:46:41.244055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error 
state 00:26:33.059 [2024-11-17 01:46:41.244071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:33.059 [2024-11-17 01:46:41.244087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:33.059 [2024-11-17 01:46:41.244104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:33.059 [2024-11-17 01:46:41.244118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:34.932 1222.00 IOPS, 4.77 MiB/s [2024-11-17T01:46:43.391Z] 1047.43 IOPS, 4.09 MiB/s [2024-11-17T01:46:43.391Z] [2024-11-17 01:46:43.244492] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:26:34.932 [2024-11-17 01:46:43.244560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:26:34.932 [2024-11-17 01:46:43.244595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:26:34.932 [2024-11-17 01:46:43.244609] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] already in failed state 00:26:34.932 [2024-11-17 01:46:43.244628] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:26:35.869 916.50 IOPS, 3.58 MiB/s 00:26:35.869 Latency(us) 00:26:35.869 [2024-11-17T01:46:44.328Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.869 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:35.869 Verification LBA range: start 0x0 length 0x4000 00:26:35.869 NVMe0n1 : 8.16 898.85 3.51 15.69 0.00 139787.27 3872.58 7046430.72 00:26:35.869 [2024-11-17T01:46:44.328Z] =================================================================================================================== 00:26:35.869 [2024-11-17T01:46:44.328Z] Total : 898.85 3.51 15.69 0.00 139787.27 3872.58 7046430.72 00:26:35.869 { 00:26:35.869 "results": [ 00:26:35.869 { 00:26:35.869 "job": "NVMe0n1", 00:26:35.869 "core_mask": "0x4", 00:26:35.869 "workload": "verify", 00:26:35.869 "status": "finished", 00:26:35.869 "verify_range": { 00:26:35.869 "start": 0, 00:26:35.869 "length": 16384 00:26:35.869 }, 00:26:35.869 "queue_depth": 128, 00:26:35.869 "io_size": 4096, 00:26:35.869 "runtime": 8.157129, 00:26:35.869 "iops": 898.8456600355346, 00:26:35.869 "mibps": 3.511115859513807, 00:26:35.869 "io_failed": 128, 00:26:35.869 "io_timeout": 0, 00:26:35.869 "avg_latency_us": 139787.26774360225, 00:26:35.869 "min_latency_us": 3872.581818181818, 00:26:35.869 "max_latency_us": 7046430.72 00:26:35.869 } 00:26:35.869 ], 00:26:35.869 "core_count": 1 00:26:35.869 } 00:26:36.438 01:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # get_controller 00:26:36.438 01:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:36.438 01:46:44 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@41 -- # jq -r '.[].name' 00:26:36.698 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:26:36.698 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # get_bdev 00:26:36.698 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:26:36.698 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@37 -- # jq -r '.[].name' 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@65 -- # wait 88161 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@67 -- # killprocess 88138 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88138 ']' 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88138 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88138 00:26:36.957 killing process with pid 88138 00:26:36.957 Received shutdown signal, test time was about 9.192735 seconds 00:26:36.957 00:26:36.957 Latency(us) 00:26:36.957 [2024-11-17T01:46:45.416Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:36.957 [2024-11-17T01:46:45.416Z] =================================================================================================================== 00:26:36.957 [2024-11-17T01:46:45.416Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88138' 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88138 00:26:36.957 01:46:45 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88138 00:26:37.895 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:38.156 [2024-11-17 01:46:46.366188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:38.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@74 -- # bdevperf_pid=88291 00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@76 -- # waitforlisten 88291 /var/tmp/bdevperf.sock 00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88291 ']' 00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.156 01:46:46 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:38.156 [2024-11-17 01:46:46.492423] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:26:38.156 [2024-11-17 01:46:46.492901] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88291 ] 00:26:38.436 [2024-11-17 01:46:46.679748] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.436 [2024-11-17 01:46:46.792166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:38.709 [2024-11-17 01:46:46.948798] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:26:38.968 01:46:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.968 01:46:47 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:38.968 01:46:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:39.536 01:46:47 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:26:39.795 NVMe0n1 00:26:39.795 01:46:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:39.795 01:46:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@84 -- # rpc_pid=88312 00:26:39.795 01:46:48 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@86 -- # sleep 1 00:26:39.795 Running I/O for 10 seconds... 
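The xtrace above (host/timeout.sh@78-@86) drives this controller-loss case entirely through rpc.py against the bdevperf RPC socket. Below is a minimal annotated sketch of that same sequence, assuming the socket path, target address, and subsystem NQN shown in this run; the comments on the reconnect/timeout flags reflect their documented bdev_nvme meanings and are annotations added here, not part of the original trace.

```bash
#!/usr/bin/env bash
# Sketch of the timeout-test setup recorded in the log above; paths and the
# 10.0.0.3:4420 listener are the ones from this particular run.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

# Same bdev_nvme_set_options call as in the trace (-r -1).
$rpc -s $sock bdev_nvme_set_options -r -1

# Attach the TCP controller with the recovery knobs exercised by this test:
#   --reconnect-delay-sec 1      wait ~1s between reconnect attempts
#   --fast-io-fail-timeout-sec 2 fail outstanding I/O quickly once the path
#                                has been down this long (the ABORTED entries)
#   --ctrlr-loss-timeout-sec 5   give up after 5s without a successful
#                                reconnect and leave the controller in the
#                                failed state seen later in the log
$rpc -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 \
    --reconnect-delay-sec 1

# Start traffic; the next step in the log removes the target listener and the
# reconnect attempts/aborts above are the result.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &
```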
00:26:40.732 01:46:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:40.992 6549.00 IOPS, 25.58 MiB/s [2024-11-17T01:46:49.451Z] [2024-11-17 01:46:49.298055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:60584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.992 [2024-11-17 01:46:49.298110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:60712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:60720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:60728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:60736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:60744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:60752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:60760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:60768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:60776 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:60784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:60792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:60800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:60808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:60816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:60824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.992 [2024-11-17 01:46:49.298602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.992 [2024-11-17 01:46:49.298618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:60832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:60840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:60848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:60856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:40.993 [2024-11-17 01:46:49.298720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:60864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:60872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:60880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:60888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:60896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:60904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:60912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.298975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:60920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.298988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:60928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:60936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299045] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:60944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:60952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:60960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:60968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:60976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:60984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:60992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.299958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.299987] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.993 [2024-11-17 01:46:49.300448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.993 [2024-11-17 01:46:49.300467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 
[2024-11-17 01:46:49.300649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.300798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.300811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.301145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.301584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.302040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.302410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.302861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303412] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:41 nsid:1 lba:61344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.303954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.303975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61424 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.304975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.304989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.994 [2024-11-17 01:46:49.305007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.994 [2024-11-17 01:46:49.305021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 
01:46:49.305228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.305577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:60592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.305608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:60600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.305641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:60608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.305690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.305946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:60616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.305980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:60624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:60632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:60640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:60648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:60656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:60664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:60672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:60680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:60688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:60696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:60704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-17 01:46:49.306458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.995 [2024-11-17 01:46:49.306488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.306505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b500 is same with the state(6) to be set 00:26:40.995 [2024-11-17 01:46:49.306524] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:40.995 [2024-11-17 01:46:49.306539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:40.995 [2024-11-17 01:46:49.306562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61600 len:8 PRP1 0x0 PRP2 0x0 00:26:40.995 [2024-11-17 01:46:49.306579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.307310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.995 [2024-11-17 01:46:49.307789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.308267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.995 [2024-11-17 01:46:49.308619] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.309093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.995 [2024-11-17 01:46:49.309401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.309434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:40.995 [2024-11-17 01:46:49.309547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:40.995 [2024-11-17 01:46:49.309566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:40.996 [2024-11-17 01:46:49.309894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:40.996 [2024-11-17 01:46:49.309941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:40.996 [2024-11-17 01:46:49.310080] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:40.996 [2024-11-17 01:46:49.310111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:40.996 [2024-11-17 01:46:49.310132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:40.996 [2024-11-17 01:46:49.310161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:40.996 [2024-11-17 01:46:49.310205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:40.996 [2024-11-17 01:46:49.310220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:40.996 [2024-11-17 01:46:49.310238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:40.996 [2024-11-17 01:46:49.310256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
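While the listener is gone, the host keeps cycling through the sequence above: connect() fails with errno 111, the controller is marked failed, the reset is retried. One way to watch that state from outside is to query the host-side bdev_nvme layer over the same RPC socket bdevperf exposes; this is only a hedged sketch — the socket path matches the perform_tests call later in this log, and the controller name NVMe0 is inferred from the NVMe0n1 bdev in the results, not stated anywhere in the log.

# Hedged sketch: query the host-side bdev_nvme layer over bdevperf's RPC socket.
# Assumptions: the socket is /var/tmp/bdevperf.sock (as in perform_tests below)
# and the controller was attached as "NVMe0" (inferred, not confirmed by this log).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0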
00:26:40.996 [2024-11-17 01:46:49.310273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:40.996 01:46:49 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@90 -- # sleep 1
00:26:41.932 3786.50 IOPS, 14.79 MiB/s [2024-11-17T01:46:50.391Z] [2024-11-17 01:46:50.310429] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111
00:26:41.932 [2024-11-17 01:46:50.310509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420
00:26:41.932 [2024-11-17 01:46:50.310534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set
00:26:41.932 [2024-11-17 01:46:50.310565] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor
00:26:41.932 [2024-11-17 01:46:50.310595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:41.932 [2024-11-17 01:46:50.310609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:41.932 [2024-11-17 01:46:50.310627] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:41.932 [2024-11-17 01:46:50.310642] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:41.932 [2024-11-17 01:46:50.310659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:41.932 01:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420
00:26:42.191 [2024-11-17 01:46:50.579174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 ***
00:26:42.191 01:46:50 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@92 -- # wait 88312
00:26:43.017 2524.33 IOPS, 9.86 MiB/s [2024-11-17T01:46:51.476Z] [2024-11-17 01:46:51.331524] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
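The lines above are the core of this timeout case: the test drops the TCP listener, lets outstanding I/O abort while the host spins in its reset loop, then restores the listener so the next reconnect succeeds. Condensed into a sketch, with paths, NQN, address, and port taken verbatim from the commands that appear in this log:

# Sketch of the listener toggle driven by host/timeout.sh (commands as seen in this log).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
$RPC nvmf_subsystem_remove_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420   # I/O aborts (SQ DELETION), host enters reset loop
sleep 1
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.3 -s 4420      # next reconnect succeeds ("Resetting controller successful")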
00:26:44.892 1893.25 IOPS, 7.40 MiB/s [2024-11-17T01:46:54.287Z] 2936.40 IOPS, 11.47 MiB/s [2024-11-17T01:46:55.225Z] 3895.00 IOPS, 15.21 MiB/s [2024-11-17T01:46:56.161Z] 4572.43 IOPS, 17.86 MiB/s [2024-11-17T01:46:57.539Z] 5086.88 IOPS, 19.87 MiB/s [2024-11-17T01:46:58.476Z] 5494.78 IOPS, 21.46 MiB/s [2024-11-17T01:46:58.476Z] 5810.30 IOPS, 22.70 MiB/s
00:26:50.017 Latency(us)
00:26:50.017 [2024-11-17T01:46:58.476Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:50.017 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:26:50.017 Verification LBA range: start 0x0 length 0x4000
00:26:50.017 NVMe0n1 : 10.01 5814.61 22.71 0.00 0.00 21986.38 1444.77 3035150.89
00:26:50.017 [2024-11-17T01:46:58.476Z] ===================================================================================================================
00:26:50.017 [2024-11-17T01:46:58.476Z] Total : 5814.61 22.71 0.00 0.00 21986.38 1444.77 3035150.89
00:26:50.017 {
00:26:50.017   "results": [
00:26:50.017     {
00:26:50.017       "job": "NVMe0n1",
00:26:50.017       "core_mask": "0x4",
00:26:50.017       "workload": "verify",
00:26:50.017       "status": "finished",
00:26:50.017       "verify_range": {
00:26:50.017         "start": 0,
00:26:50.017         "length": 16384
00:26:50.017       },
00:26:50.017       "queue_depth": 128,
00:26:50.017       "io_size": 4096,
00:26:50.017       "runtime": 10.009619,
00:26:50.017       "iops": 5814.6069295944235,
00:26:50.017       "mibps": 22.713308318728217,
00:26:50.017       "io_failed": 0,
00:26:50.017       "io_timeout": 0,
00:26:50.017       "avg_latency_us": 21986.377443324345,
00:26:50.017       "min_latency_us": 1444.770909090909,
00:26:50.017       "max_latency_us": 3035150.8945454545
00:26:50.017     }
00:26:50.017   ],
00:26:50.017   "core_count": 1
00:26:50.017 }
00:26:50.017 01:46:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@97 -- # rpc_pid=88417
00:26:50.017 01:46:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:50.017 01:46:58 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@98 -- # sleep 1
00:26:50.017 Running I/O for 10 seconds...
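The JSON block above is bdevperf's per-job result object; the human-readable table and the JSON carry the same numbers (5814.61 IOPS, 22.71 MiB/s, roughly 22 ms average latency). A hedged sketch for pulling those headline values back out, assuming perform_tests' stdout is just the JSON summary and jq is available:

# Hedged sketch: capture and parse the bdevperf result JSON shown above.
# Assumptions: perform_tests writes only the JSON summary to stdout, saved here
# as perf.json (a name chosen for illustration), and jq is installed.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bdevperf.sock perform_tests > perf.json
jq '.results[0] | {job, iops, mibps, avg_latency_us, min_latency_us, max_latency_us, io_failed}' perf.json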
00:26:50.952 01:46:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:51.212 8244.00 IOPS, 32.20 MiB/s [2024-11-17T01:46:59.671Z] [2024-11-17 01:46:59.447285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:51.212 [2024-11-17 01:46:59.447343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004480 is same with the state(6) to be set 00:26:51.212 [2024-11-17 01:46:59.447432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.212 [2024-11-17 01:46:59.447471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.212 [2024-11-17 01:46:59.447500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.212 [2024-11-17 01:46:59.447515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.212 [2024-11-17 01:46:59.447529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.212 [2024-11-17 01:46:59.447542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.212 [2024-11-17 01:46:59.447556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:76384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.212 [2024-11-17 01:46:59.447569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.212 [2024-11-17 01:46:59.447582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:76392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.212 [2024-11-17 01:46:59.447594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.212 [2024-11-17 01:46:59.447633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:76400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.212 [2024-11-17 01:46:59.447663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.212 [2024-11-17 01:46:59.447678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:76408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.447691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.447705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:76416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.447718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.447732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:76424 len:8 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.447744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.447760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:76432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.447773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.447787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:76440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.447800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.447814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.447842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.447892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.447906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.447921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.447950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.447980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:76032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:76040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:76048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:76056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:51.213 [2024-11-17 01:46:59.448129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:76064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:76072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:76080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:76088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:76096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:76104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:76112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.448376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.448996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:76448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:76456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449047] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:76464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:76472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:76480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:76488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:76496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:76504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:76512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:76520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:76528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:76536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449342] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:76544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:76552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:76560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:76568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.213 [2024-11-17 01:46:59.449446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:76128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.449472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:76136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.449498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.213 [2024-11-17 01:46:59.449511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:76144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.213 [2024-11-17 01:46:59.449523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:76152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.449549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:76160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.449575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:76168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.449600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:76176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.449626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:76184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.449652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:76576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:76584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:76592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:76600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:76608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:76616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:76624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:76632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 
[2024-11-17 01:46:59.449892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:76640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:76656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.449975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:76664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.449987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:76672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:76680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:76688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:76696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:76704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:76712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:76720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:76728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:76192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.450219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:76200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.450244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:76208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.450269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:76216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.450295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:76224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.450321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.450346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:76240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.450375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:76248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.214 [2024-11-17 01:46:59.450400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:20 nsid:1 lba:76736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:76744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:76752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:76760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:76768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:76776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.214 [2024-11-17 01:46:59.450554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.214 [2024-11-17 01:46:59.450567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:76784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:76792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:76800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:76808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:76816 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:76824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:76832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:76840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:76848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:76856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:76864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:76872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.450898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:76256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.450923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:76264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.450949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:76272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 
[2024-11-17 01:46:59.450974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.450988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:76280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:76288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:76296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:76304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:76312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:76320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:76328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:76336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:76344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:76352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:76360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:76368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:76376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:51.215 [2024-11-17 01:46:59.451336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:76880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:76888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:76896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:76904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:76912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:76920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:76928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:76936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:76944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:76952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:76960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:76968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.215 [2024-11-17 01:46:59.451699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.215 [2024-11-17 01:46:59.451717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:76976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-11-17 01:46:59.451731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.216 [2024-11-17 01:46:59.451746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:76984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:51.216 [2024-11-17 01:46:59.451760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.216 [2024-11-17 01:46:59.451774] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002bc80 is same with the state(6) to be set 00:26:51.216 [2024-11-17 01:46:59.451794] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:51.216 [2024-11-17 01:46:59.451807] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:51.216 [2024-11-17 01:46:59.451820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:76992 len:8 PRP1 0x0 PRP2 0x0 00:26:51.216 [2024-11-17 01:46:59.451847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:51.216 [2024-11-17 01:46:59.452368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:51.216 [2024-11-17 01:46:59.452465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:51.216 [2024-11-17 01:46:59.452587] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:51.216 [2024-11-17 01:46:59.452616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:51.216 [2024-11-17 01:46:59.452632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:51.216 [2024-11-17 01:46:59.452656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:51.216 [2024-11-17 01:46:59.452679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:51.216 [2024-11-17 01:46:59.452692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:51.216 [2024-11-17 01:46:59.452706] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:51.216 [2024-11-17 01:46:59.452720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:51.216 [2024-11-17 01:46:59.452734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:51.216 01:46:59 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@101 -- # sleep 3 00:26:52.152 4748.50 IOPS, 18.55 MiB/s [2024-11-17T01:47:00.611Z] [2024-11-17 01:47:00.452904] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:52.152 [2024-11-17 01:47:00.453301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:52.152 [2024-11-17 01:47:00.453684] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:52.152 [2024-11-17 01:47:00.454107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:52.152 [2024-11-17 01:47:00.454503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:52.152 [2024-11-17 01:47:00.454875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:52.152 [2024-11-17 01:47:00.455304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:52.152 [2024-11-17 01:47:00.455567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
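[editor's note] The repeated connect() failures above report errno = 111, which on Linux is ECONNREFUSED: nothing is accepting connections on 10.0.0.3:4420 at this point in the run, so every reconnect attempt made while host/timeout.sh sleeps for 3 seconds is refused until the listener is restored via nvmf_subsystem_add_listener further down. A minimal way to decode the errno value seen in the log (the python3 one-liner is only an illustration, not part of the test scripts):

  # Decode errno 111 as reported by uring_sock_create's connect() failure.
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # Expected output on Linux: ECONNREFUSED - Connection refused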
00:26:52.152 [2024-11-17 01:47:00.456060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:53.088 3165.67 IOPS, 12.37 MiB/s [2024-11-17T01:47:01.547Z] [2024-11-17 01:47:01.456654] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:53.088 [2024-11-17 01:47:01.457074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:53.088 [2024-11-17 01:47:01.457510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:53.088 [2024-11-17 01:47:01.457979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:53.088 [2024-11-17 01:47:01.458486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:53.088 [2024-11-17 01:47:01.458961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:53.088 [2024-11-17 01:47:01.459358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:53.088 [2024-11-17 01:47:01.459789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 00:26:53.088 [2024-11-17 01:47:01.459833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:54.024 2374.25 IOPS, 9.27 MiB/s [2024-11-17T01:47:02.483Z] [2024-11-17 01:47:02.460375] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.024 [2024-11-17 01:47:02.460728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002ab00 with addr=10.0.0.3, port=4420 00:26:54.024 [2024-11-17 01:47:02.461147] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002ab00 is same with the state(6) to be set 00:26:54.024 [2024-11-17 01:47:02.461785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002ab00 (9): Bad file descriptor 00:26:54.024 [2024-11-17 01:47:02.462456] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Ctrlr is in error state 00:26:54.024 [2024-11-17 01:47:02.462838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] controller reinitialization failed 00:26:54.024 [2024-11-17 01:47:02.463196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:54.024 [2024-11-17 01:47:02.463463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller failed. 
00:26:54.024 [2024-11-17 01:47:02.463927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:54.024 01:47:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:26:54.283 [2024-11-17 01:47:02.722123] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:26:54.541 01:47:02 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@103 -- # wait 88417 00:26:55.108 1899.40 IOPS, 7.42 MiB/s [2024-11-17T01:47:03.567Z] [2024-11-17 01:47:03.486984] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful. 00:26:56.978 2771.67 IOPS, 10.83 MiB/s [2024-11-17T01:47:06.373Z] 3647.00 IOPS, 14.25 MiB/s [2024-11-17T01:47:07.310Z] 4298.25 IOPS, 16.79 MiB/s [2024-11-17T01:47:08.687Z] 4808.00 IOPS, 18.78 MiB/s [2024-11-17T01:47:08.687Z] 5208.60 IOPS, 20.35 MiB/s 00:27:00.228 Latency(us) 00:27:00.228 [2024-11-17T01:47:08.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.228 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:00.228 Verification LBA range: start 0x0 length 0x4000 00:27:00.228 NVMe0n1 : 10.01 5213.62 20.37 4105.48 0.00 13707.71 722.39 3019898.88 00:27:00.228 [2024-11-17T01:47:08.687Z] =================================================================================================================== 00:27:00.228 [2024-11-17T01:47:08.687Z] Total : 5213.62 20.37 4105.48 0.00 13707.71 0.00 3019898.88 00:27:00.228 { 00:27:00.228 "results": [ 00:27:00.228 { 00:27:00.228 "job": "NVMe0n1", 00:27:00.228 "core_mask": "0x4", 00:27:00.228 "workload": "verify", 00:27:00.228 "status": "finished", 00:27:00.228 "verify_range": { 00:27:00.228 "start": 0, 00:27:00.228 "length": 16384 00:27:00.228 }, 00:27:00.228 "queue_depth": 128, 00:27:00.228 "io_size": 4096, 00:27:00.228 "runtime": 10.009556, 00:27:00.228 "iops": 5213.617866766518, 00:27:00.228 "mibps": 20.36569479205671, 00:27:00.228 "io_failed": 41094, 00:27:00.228 "io_timeout": 0, 00:27:00.228 "avg_latency_us": 13707.713267425543, 00:27:00.228 "min_latency_us": 722.3854545454545, 00:27:00.228 "max_latency_us": 3019898.88 00:27:00.228 } 00:27:00.228 ], 00:27:00.228 "core_count": 1 00:27:00.228 } 00:27:00.228 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@105 -- # killprocess 88291 00:27:00.228 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88291 ']' 00:27:00.228 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88291 00:27:00.228 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:00.228 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:00.228 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88291 00:27:00.228 killing process with pid 88291 00:27:00.228 Received shutdown signal, test time was about 10.000000 seconds 00:27:00.228 00:27:00.228 Latency(us) 00:27:00.228 [2024-11-17T01:47:08.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:00.228 [2024-11-17T01:47:08.687Z] =================================================================================================================== 00:27:00.228 [2024-11-17T01:47:08.687Z] Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:27:00.228 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:00.229 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:00.229 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88291' 00:27:00.229 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88291 00:27:00.229 01:47:08 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88291 00:27:00.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@110 -- # bdevperf_pid=88539 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@112 -- # waitforlisten 88539 /var/tmp/bdevperf.sock 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@835 -- # '[' -z 88539 ']' 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.797 01:47:09 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:00.797 [2024-11-17 01:47:09.250978] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
00:27:00.797 [2024-11-17 01:47:09.251158] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88539 ] 00:27:01.056 [2024-11-17 01:47:09.431216] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:01.315 [2024-11-17 01:47:09.519797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:01.315 [2024-11-17 01:47:09.668837] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:01.883 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.883 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@868 -- # return 0 00:27:01.883 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@116 -- # dtrace_pid=88556 00:27:01.883 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 88539 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:27:01.883 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:27:02.142 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:27:02.400 NVMe0n1 00:27:02.401 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@124 -- # rpc_pid=88595 00:27:02.401 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:02.401 01:47:10 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@125 -- # sleep 1 00:27:02.660 Running I/O for 10 seconds... 
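[editor's note] The block above brings up the second bdevperf run: bdevperf is started against /var/tmp/bdevperf.sock, bdev_nvme options are set, the TCP controller is attached with a 5 s ctrlr-loss timeout and a 2 s reconnect delay, and the 10-second randread workload is then triggered with perform_tests. A minimal sketch of that same command sequence, using the repo paths and flag values visible in the log (an illustration of the calls the log shows, not the test script itself; the -r/-e/-f values are copied verbatim without interpretation):

  SPDK=/home/vagrant/spdk_repo/spdk
  SOCK=/var/tmp/bdevperf.sock

  # Start bdevperf idle so the NVMe bdev can be attached over RPC first.
  $SPDK/build/examples/bdevperf -m 0x4 -z -r $SOCK -q 128 -o 4096 -w randread -t 10 -f &

  # Wait until the RPC socket exists before issuing commands.
  while [ ! -S "$SOCK" ]; do sleep 0.1; done

  # Same bdev_nvme options as the RPC call in the log.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_set_options -r -1 -e 9

  # Attach the TCP controller with the reconnect behavior exercised by this test.
  $SPDK/scripts/rpc.py -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.3 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2

  # Kick off the queued workload against the attached NVMe0n1 bdev.
  $SPDK/examples/bdev/bdevperf/bdevperf.py -s $SOCK perform_tests

With those attach options, the log that follows is expected: once the listener is removed, bdev_nvme retries the connection every 2 seconds and gives up (fails the reset) if the controller stays unreachable past the 5-second loss timeout.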
00:27:03.599 01:47:11 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:03.599 13589.00 IOPS, 53.08 MiB/s [2024-11-17T01:47:12.058Z] [2024-11-17 01:47:12.045648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.045963] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046562] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046678] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.599 [2024-11-17 01:47:12.046712] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046747] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046779] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046955] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046966] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.046990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047015] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047224] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047247] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047316] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047453] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047463] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047487] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047499] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047577] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047603] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.600 [2024-11-17 01:47:12.047763] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047910] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047959] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.047985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.048012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.048023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.048035] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.048046] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.048060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000005880 is same with the state(6) to be set 00:27:03.601 [2024-11-17 01:47:12.048185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.048222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.048258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:18328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.048273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:62288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:41240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:25904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:130648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:37888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049248] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:117 nsid:1 lba:47000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:36928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:89624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:91296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:92984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:4184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:62856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:49056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 
nsid:1 lba:91088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:117560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:26432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:50096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:83880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:121352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:95504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.601 [2024-11-17 01:47:12.049752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.601 [2024-11-17 01:47:12.049769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.049781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.049798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:117264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.050096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.050611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:8640 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.051149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.051758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:67848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:68016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:35952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:117376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:73920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:74472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:127000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:03.602 [2024-11-17 01:47:12.052612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:32624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:104624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:116968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:78048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:48520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.052942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:103736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.052986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 
01:47:12.053016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:38608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:109216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:126120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:115240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.602 [2024-11-17 01:47:12.053826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:41848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.602 [2024-11-17 01:47:12.053873] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.603 [2024-11-17 01:47:12.053894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:89920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.603 [2024-11-17 01:47:12.053908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.603 [2024-11-17 01:47:12.053925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:16136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.603 [2024-11-17 01:47:12.053937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.603 [2024-11-17 01:47:12.053954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.603 [2024-11-17 01:47:12.053967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.603 [2024-11-17 01:47:12.053984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:118792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.603 [2024-11-17 01:47:12.053996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.603 [2024-11-17 01:47:12.054015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.603 [2024-11-17 01:47:12.054028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.603 [2024-11-17 01:47:12.054045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.603 [2024-11-17 01:47:12.054058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.603 [2024-11-17 01:47:12.054074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:72280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.603 [2024-11-17 01:47:12.054086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.603 [2024-11-17 01:47:12.054102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:107896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.603 [2024-11-17 01:47:12.054130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:30504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:14328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:78416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:38816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:37328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:12464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:100656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:94920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.054947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.054960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:82128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:87512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:66496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:19528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:67536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:104696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 [2024-11-17 01:47:12.055313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:83248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.863 
[2024-11-17 01:47:12.055343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:32360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.863 [2024-11-17 01:47:12.055355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:21832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:126024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:108232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:9432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:7584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:37088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:46056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:128832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.055783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.055801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:126216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.056490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.056951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:74880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.057384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.057770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:106848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.058295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.058681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:57152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:58944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:52584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059597] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:16 nsid:1 lba:129248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:94520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:38720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:112752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.059957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:76808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.059986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:91960 
len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.060033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:125760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.060062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.060091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:104600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.060120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:109272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.060151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.060196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:26112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.864 [2024-11-17 01:47:12.060224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002b280 is same with the state(6) to be set 00:27:03.864 [2024-11-17 01:47:12.060257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:03.864 [2024-11-17 01:47:12.060273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:03.864 [2024-11-17 01:47:12.060286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:118824 len:8 PRP1 0x0 PRP2 0x0 00:27:03.864 [2024-11-17 01:47:12.060301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.864 [2024-11-17 01:47:12.060633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.864 [2024-11-17 01:47:12.060655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.865 [2024-11-17 01:47:12.060673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.865 [2024-11-17 01:47:12.060685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.865 [2024-11-17 01:47:12.060699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.865 [2024-11-17 01:47:12.060711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.865 [2024-11-17 01:47:12.060726] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:03.865 [2024-11-17 01:47:12.060738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:03.865 [2024-11-17 01:47:12.060751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:03.865 [2024-11-17 01:47:12.061034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:03.865 [2024-11-17 01:47:12.061077] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:03.865 [2024-11-17 01:47:12.061230] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.865 [2024-11-17 01:47:12.061262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:03.865 [2024-11-17 01:47:12.061281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:03.865 [2024-11-17 01:47:12.061308] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:03.865 [2024-11-17 01:47:12.061337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:03.865 [2024-11-17 01:47:12.061351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:03.865 [2024-11-17 01:47:12.061368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:03.865 [2024-11-17 01:47:12.061383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 
00:27:03.865 [2024-11-17 01:47:12.061399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:03.865 01:47:12 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@128 -- # wait 88595 00:27:05.777 7810.50 IOPS, 30.51 MiB/s [2024-11-17T01:47:14.236Z] 5207.00 IOPS, 20.34 MiB/s [2024-11-17T01:47:14.236Z] [2024-11-17 01:47:14.061599] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:05.777 [2024-11-17 01:47:14.061676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:05.777 [2024-11-17 01:47:14.061701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:05.777 [2024-11-17 01:47:14.061750] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:05.777 [2024-11-17 01:47:14.061779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:05.777 [2024-11-17 01:47:14.061793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:05.777 [2024-11-17 01:47:14.061814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:05.777 [2024-11-17 01:47:14.061842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:05.777 [2024-11-17 01:47:14.061877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:07.650 3905.25 IOPS, 15.25 MiB/s [2024-11-17T01:47:16.109Z] 3124.20 IOPS, 12.20 MiB/s [2024-11-17T01:47:16.109Z] [2024-11-17 01:47:16.062065] uring.c: 664:uring_sock_create: *ERROR*: connect() failed, errno = 111 00:27:07.650 [2024-11-17 01:47:16.062138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x61500002a880 with addr=10.0.0.3, port=4420 00:27:07.650 [2024-11-17 01:47:16.062164] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x61500002a880 is same with the state(6) to be set 00:27:07.650 [2024-11-17 01:47:16.062195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x61500002a880 (9): Bad file descriptor 00:27:07.650 [2024-11-17 01:47:16.062224] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:07.650 [2024-11-17 01:47:16.062238] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:07.650 [2024-11-17 01:47:16.062253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:27:07.650 [2024-11-17 01:47:16.062268] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:07.650 [2024-11-17 01:47:16.062284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:27:09.524 2603.50 IOPS, 10.17 MiB/s [2024-11-17T01:47:18.242Z] 2231.57 IOPS, 8.72 MiB/s [2024-11-17T01:47:18.242Z] [2024-11-17 01:47:18.062377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:27:09.783 [2024-11-17 01:47:18.062465] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Ctrlr is in error state 00:27:09.783 [2024-11-17 01:47:18.062483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] controller reinitialization failed 00:27:09.783 [2024-11-17 01:47:18.062503] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] already in failed state 00:27:09.783 [2024-11-17 01:47:18.062518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller failed. 00:27:10.720 1952.62 IOPS, 7.63 MiB/s 00:27:10.720 Latency(us) 00:27:10.720 [2024-11-17T01:47:19.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.720 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:27:10.720 NVMe0n1 : 8.17 1912.02 7.47 15.67 0.00 66441.73 8817.57 7046430.72 00:27:10.720 [2024-11-17T01:47:19.179Z] =================================================================================================================== 00:27:10.720 [2024-11-17T01:47:19.179Z] Total : 1912.02 7.47 15.67 0.00 66441.73 8817.57 7046430.72 00:27:10.720 { 00:27:10.720 "results": [ 00:27:10.720 { 00:27:10.720 "job": "NVMe0n1", 00:27:10.720 "core_mask": "0x4", 00:27:10.720 "workload": "randread", 00:27:10.720 "status": "finished", 00:27:10.720 "queue_depth": 128, 00:27:10.720 "io_size": 4096, 00:27:10.720 "runtime": 8.169881, 00:27:10.720 "iops": 1912.022953577904, 00:27:10.720 "mibps": 7.4688396624136875, 00:27:10.720 "io_failed": 128, 00:27:10.720 "io_timeout": 0, 00:27:10.720 "avg_latency_us": 66441.7271990718, 00:27:10.720 "min_latency_us": 8817.57090909091, 00:27:10.720 "max_latency_us": 7046430.72 00:27:10.720 } 00:27:10.720 ], 00:27:10.720 "core_count": 1 00:27:10.720 } 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:10.720 Attaching 5 probes... 
00:27:10.720 1353.208978: reset bdev controller NVMe0 00:27:10.720 1353.335166: reconnect bdev controller NVMe0 00:27:10.720 3353.646406: reconnect delay bdev controller NVMe0 00:27:10.720 3353.681724: reconnect bdev controller NVMe0 00:27:10.720 5354.137976: reconnect delay bdev controller NVMe0 00:27:10.720 5354.172335: reconnect bdev controller NVMe0 00:27:10.720 7354.538846: reconnect delay bdev controller NVMe0 00:27:10.720 7354.574532: reconnect bdev controller NVMe0 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@136 -- # kill 88556 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@139 -- # killprocess 88539 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88539 ']' 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88539 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88539 00:27:10.720 killing process with pid 88539 00:27:10.720 Received shutdown signal, test time was about 8.239075 seconds 00:27:10.720 00:27:10.720 Latency(us) 00:27:10.720 [2024-11-17T01:47:19.179Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.720 [2024-11-17T01:47:19.179Z] =================================================================================================================== 00:27:10.720 [2024-11-17T01:47:19.179Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88539' 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88539 00:27:10.720 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88539 00:27:11.658 01:47:19 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- host/timeout.sh@145 -- # nvmftestfini 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@121 -- # sync 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@124 -- # set +e 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:11.917 01:47:20 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:11.917 rmmod nvme_tcp 00:27:11.917 rmmod nvme_fabrics 00:27:11.917 rmmod nvme_keyring 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@128 -- # set -e 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@129 -- # return 0 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@517 -- # '[' -n 88087 ']' 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@518 -- # killprocess 88087 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@954 -- # '[' -z 88087 ']' 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@958 -- # kill -0 88087 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # uname 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88087 00:27:11.917 killing process with pid 88087 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88087' 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@973 -- # kill 88087 00:27:11.917 01:47:20 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@978 -- # wait 88087 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@297 -- # iptr 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@791 -- # iptables-restore 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:27:12.855 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:27:13.114 01:47:21 
nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@246 -- # remove_spdk_ns 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- nvmf/common.sh@300 -- # return 0 00:27:13.114 ************************************ 00:27:13.114 END TEST nvmf_timeout 00:27:13.114 ************************************ 00:27:13.114 00:27:13.114 real 0m50.186s 00:27:13.114 user 2m26.276s 00:27:13.114 sys 0m5.337s 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host.nvmf_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ virt == phy ]] 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:13.114 00:27:13.114 real 6m22.286s 00:27:13.114 user 17m42.899s 00:27:13.114 sys 1m15.647s 00:27:13.114 ************************************ 00:27:13.114 END TEST nvmf_host 00:27:13.114 ************************************ 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.114 01:47:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.114 01:47:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:27:13.114 01:47:21 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 1 -eq 0 ]] 00:27:13.114 00:27:13.114 real 17m2.002s 00:27:13.114 user 44m19.095s 00:27:13.114 sys 4m4.549s 00:27:13.114 ************************************ 00:27:13.114 END TEST nvmf_tcp 00:27:13.114 ************************************ 00:27:13.114 01:47:21 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.114 01:47:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:13.373 01:47:21 -- spdk/autotest.sh@285 -- # [[ 1 -eq 0 ]] 00:27:13.373 01:47:21 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:13.373 01:47:21 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:13.373 01:47:21 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:13.373 01:47:21 -- common/autotest_common.sh@10 -- # set +x 00:27:13.373 ************************************ 00:27:13.373 START TEST nvmf_dif 00:27:13.373 ************************************ 00:27:13.373 01:47:21 nvmf_dif -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:27:13.373 * Looking for test storage... 
00:27:13.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:27:13.373 01:47:21 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:13.373 01:47:21 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:27:13.374 01:47:21 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:13.374 01:47:21 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:27:13.374 01:47:21 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:13.374 01:47:21 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:13.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.374 --rc genhtml_branch_coverage=1 00:27:13.374 --rc genhtml_function_coverage=1 00:27:13.374 --rc genhtml_legend=1 00:27:13.374 --rc geninfo_all_blocks=1 00:27:13.374 --rc geninfo_unexecuted_blocks=1 00:27:13.374 00:27:13.374 ' 00:27:13.374 01:47:21 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:13.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.374 --rc genhtml_branch_coverage=1 00:27:13.374 --rc genhtml_function_coverage=1 00:27:13.374 --rc genhtml_legend=1 00:27:13.374 --rc geninfo_all_blocks=1 00:27:13.374 --rc geninfo_unexecuted_blocks=1 00:27:13.374 00:27:13.374 ' 00:27:13.374 01:47:21 nvmf_dif -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 
00:27:13.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.374 --rc genhtml_branch_coverage=1 00:27:13.374 --rc genhtml_function_coverage=1 00:27:13.374 --rc genhtml_legend=1 00:27:13.374 --rc geninfo_all_blocks=1 00:27:13.374 --rc geninfo_unexecuted_blocks=1 00:27:13.374 00:27:13.374 ' 00:27:13.374 01:47:21 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:13.374 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:13.374 --rc genhtml_branch_coverage=1 00:27:13.374 --rc genhtml_function_coverage=1 00:27:13.374 --rc genhtml_legend=1 00:27:13.374 --rc geninfo_all_blocks=1 00:27:13.374 --rc geninfo_unexecuted_blocks=1 00:27:13.374 00:27:13.374 ' 00:27:13.374 01:47:21 nvmf_dif -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:13.374 01:47:21 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:13.374 01:47:21 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.374 01:47:21 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.374 01:47:21 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.374 01:47:21 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:27:13.374 01:47:21 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:13.374 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:13.374 01:47:21 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:13.374 01:47:21 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:27:13.374 01:47:21 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:27:13.374 01:47:21 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:27:13.374 01:47:21 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:27:13.633 01:47:21 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:27:13.633 01:47:21 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:13.634 01:47:21 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:13.634 01:47:21 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:27:13.634 01:47:21 
nvmf_dif -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@460 -- # nvmf_veth_init 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:27:13.634 Cannot find device "nvmf_init_br" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@162 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:27:13.634 Cannot find device "nvmf_init_br2" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@163 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:27:13.634 Cannot find device "nvmf_tgt_br" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@164 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:27:13.634 Cannot find device "nvmf_tgt_br2" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@165 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:27:13.634 Cannot find device "nvmf_init_br" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@166 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@167 -- # ip link set nvmf_init_br2 down 00:27:13.634 Cannot find device "nvmf_init_br2" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@167 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:27:13.634 Cannot find device "nvmf_tgt_br" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@168 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:27:13.634 Cannot find device "nvmf_tgt_br2" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@169 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:27:13.634 Cannot find device "nvmf_br" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@170 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@171 -- # 
ip link delete nvmf_init_if 00:27:13.634 Cannot find device "nvmf_init_if" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@171 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:27:13.634 Cannot find device "nvmf_init_if2" 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@172 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:13.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@173 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:13.634 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@174 -- # true 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:27:13.634 01:47:21 nvmf_dif -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:13.634 01:47:22 nvmf_dif -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:13.634 01:47:22 nvmf_dif -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:13.634 01:47:22 nvmf_dif -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:13.634 01:47:22 nvmf_dif -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:13.634 01:47:22 nvmf_dif -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:27:13.634 01:47:22 nvmf_dif -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:13.893 01:47:22 nvmf_dif -- 
nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:27:13.893 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:13.893 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:27:13.893 00:27:13.893 --- 10.0.0.3 ping statistics --- 00:27:13.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.893 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:27:13.893 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:27:13.893 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.042 ms 00:27:13.893 00:27:13.893 --- 10.0.0.4 ping statistics --- 00:27:13.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.893 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:13.893 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:13.893 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:27:13.893 00:27:13.893 --- 10.0.0.1 ping statistics --- 00:27:13.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.893 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:27:13.893 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:13.893 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:27:13.893 00:27:13.893 --- 10.0.0.2 ping statistics --- 00:27:13.893 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:13.893 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@461 -- # return 0 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:27:13.893 01:47:22 nvmf_dif -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:14.151 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:14.151 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:14.151 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:27:14.410 01:47:22 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:14.411 01:47:22 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:27:14.411 01:47:22 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:14.411 01:47:22 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:14.411 01:47:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=89098 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:27:14.411 01:47:22 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 89098 00:27:14.411 01:47:22 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 89098 ']' 00:27:14.411 01:47:22 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:14.411 01:47:22 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:14.411 01:47:22 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:14.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:14.411 01:47:22 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:14.411 01:47:22 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:14.411 [2024-11-17 01:47:22.784034] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:27:14.411 [2024-11-17 01:47:22.784208] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:14.670 [2024-11-17 01:47:22.981363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:14.670 [2024-11-17 01:47:23.106292] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:27:14.670 [2024-11-17 01:47:23.106546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:14.670 [2024-11-17 01:47:23.106724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:14.670 [2024-11-17 01:47:23.106764] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:14.670 [2024-11-17 01:47:23.106784] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:14.670 [2024-11-17 01:47:23.108289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:14.929 [2024-11-17 01:47:23.292975] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:27:15.496 01:47:23 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:15.496 01:47:23 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:27:15.496 01:47:23 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:15.496 01:47:23 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:15.497 01:47:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:15.497 01:47:23 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:15.497 01:47:23 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:27:15.497 01:47:23 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:15.497 01:47:23 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.497 01:47:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:15.497 [2024-11-17 01:47:23.809761] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:15.497 01:47:23 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.497 01:47:23 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:15.497 01:47:23 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:15.497 01:47:23 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:15.497 01:47:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:15.497 ************************************ 00:27:15.497 START TEST fio_dif_1_default 00:27:15.497 ************************************ 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:15.497 bdev_null0 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:15.497 
01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:15.497 [2024-11-17 01:47:23.861995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:15.497 { 00:27:15.497 "params": { 00:27:15.497 "name": "Nvme$subsystem", 00:27:15.497 "trtype": "$TEST_TRANSPORT", 00:27:15.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:15.497 "adrfam": "ipv4", 00:27:15.497 "trsvcid": "$NVMF_PORT", 00:27:15.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:15.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:15.497 "hdgst": ${hdgst:-false}, 00:27:15.497 "ddgst": ${ddgst:-false} 00:27:15.497 }, 00:27:15.497 "method": "bdev_nvme_attach_controller" 00:27:15.497 } 00:27:15.497 EOF 00:27:15.497 )") 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:15.497 "params": { 00:27:15.497 "name": "Nvme0", 00:27:15.497 "trtype": "tcp", 00:27:15.497 "traddr": "10.0.0.3", 00:27:15.497 "adrfam": "ipv4", 00:27:15.497 "trsvcid": "4420", 00:27:15.497 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:15.497 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:15.497 "hdgst": false, 00:27:15.497 "ddgst": false 00:27:15.497 }, 00:27:15.497 "method": "bdev_nvme_attach_controller" 00:27:15.497 }' 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1351 -- # break 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:15.497 01:47:23 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:15.756 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:15.756 fio-3.35 00:27:15.756 Starting 1 thread 00:27:27.985 00:27:27.985 filename0: (groupid=0, jobs=1): err= 0: pid=89162: Sun Nov 17 01:47:34 2024 00:27:27.985 read: IOPS=7910, BW=30.9MiB/s (32.4MB/s)(309MiB/10001msec) 00:27:27.985 slat (usec): min=7, max=114, avg= 9.89, stdev= 4.45 00:27:27.985 clat (usec): min=401, max=1454, avg=475.23, stdev=44.29 00:27:27.985 lat (usec): min=408, max=1469, avg=485.12, stdev=45.41 00:27:27.985 clat percentiles (usec): 00:27:27.985 | 1.00th=[ 408], 5.00th=[ 420], 10.00th=[ 429], 20.00th=[ 441], 00:27:27.985 | 30.00th=[ 449], 40.00th=[ 461], 50.00th=[ 469], 60.00th=[ 478], 00:27:27.985 | 70.00th=[ 490], 80.00th=[ 506], 90.00th=[ 529], 95.00th=[ 553], 00:27:27.985 | 99.00th=[ 619], 99.50th=[ 644], 99.90th=[ 693], 99.95th=[ 717], 00:27:27.985 | 99.99th=[ 1123] 00:27:27.985 bw ( KiB/s): min=30403, max=32608, per=100.00%, avg=31676.79, stdev=600.77, samples=19 00:27:27.985 iops : min= 7600, max= 8152, avg=7919.16, stdev=150.28, samples=19 00:27:27.985 lat (usec) : 500=77.76%, 
750=22.21%, 1000=0.01% 00:27:27.985 lat (msec) : 2=0.01% 00:27:27.985 cpu : usr=86.13%, sys=11.97%, ctx=31, majf=0, minf=1062 00:27:27.985 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:27.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:27.985 issued rwts: total=79112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:27.985 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:27.985 00:27:27.985 Run status group 0 (all jobs): 00:27:27.985 READ: bw=30.9MiB/s (32.4MB/s), 30.9MiB/s-30.9MiB/s (32.4MB/s-32.4MB/s), io=309MiB (324MB), run=10001-10001msec 00:27:27.985 ----------------------------------------------------- 00:27:27.985 Suppressions used: 00:27:27.985 count bytes template 00:27:27.985 1 8 /usr/src/fio/parse.c 00:27:27.985 1 8 libtcmalloc_minimal.so 00:27:27.985 1 904 libcrypto.so 00:27:27.985 ----------------------------------------------------- 00:27:27.985 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:27.985 01:47:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 ************************************ 00:27:27.986 END TEST fio_dif_1_default 00:27:27.986 ************************************ 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 00:27:27.986 real 0m12.203s 00:27:27.986 user 0m10.378s 00:27:27.986 sys 0m1.537s 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 01:47:36 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:27.986 01:47:36 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:27.986 01:47:36 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 ************************************ 00:27:27.986 START TEST fio_dif_1_multi_subsystems 00:27:27.986 ************************************ 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@94 -- # create_subsystems 0 1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 bdev_null0 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 [2024-11-17 01:47:36.113305] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 bdev_null1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:27.986 01:47:36 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.986 { 00:27:27.986 "params": { 00:27:27.986 "name": "Nvme$subsystem", 00:27:27.986 "trtype": "$TEST_TRANSPORT", 00:27:27.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.986 "adrfam": "ipv4", 00:27:27.986 "trsvcid": "$NVMF_PORT", 00:27:27.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.986 "hdgst": ${hdgst:-false}, 00:27:27.986 "ddgst": ${ddgst:-false} 00:27:27.986 }, 00:27:27.986 "method": "bdev_nvme_attach_controller" 00:27:27.986 } 00:27:27.986 EOF 00:27:27.986 )") 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1343 -- # local sanitizers 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:27.986 { 00:27:27.986 "params": { 00:27:27.986 "name": "Nvme$subsystem", 00:27:27.986 "trtype": "$TEST_TRANSPORT", 00:27:27.986 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:27.986 "adrfam": "ipv4", 00:27:27.986 "trsvcid": "$NVMF_PORT", 00:27:27.986 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:27.986 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:27.986 "hdgst": ${hdgst:-false}, 00:27:27.986 "ddgst": ${ddgst:-false} 00:27:27.986 }, 00:27:27.986 "method": "bdev_nvme_attach_controller" 00:27:27.986 } 00:27:27.986 EOF 00:27:27.986 )") 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
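The fio_bdev wrapper traced here runs stock fio with SPDK's bdev ioengine, feeding the generated attach-controller JSON over /dev/fd/62 and the job file over /dev/fd/61 (the merged two-controller config it produces is printed just below). A rough standalone equivalent is sketched next; the bdev_nvme_attach_controller params mirror the trace, but the outer "subsystems"/"bdev" wrapper follows SPDK's usual JSON config layout rather than being copied from gen_nvmf_target_json, and the inline job options merely stand in for the harness's generated job file.

# Sketch only -- not the harness's exact invocation.
cat > /tmp/nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.3",
            "adrfam": "ipv4", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# Attached namespaces appear as bdevs named <name>n<nsid>, hence Nvme0n1.
# The harness additionally preloads libasan (see the LD_PRELOAD line above).
LD_PRELOAD=./build/fio/spdk_bdev fio \
    --ioengine=spdk_bdev --spdk_json_conf=/tmp/nvme0.json --thread=1 \
    --name=filename0 --filename=Nvme0n1 --rw=randread --bs=4k --iodepth=4 \
    --time_based=1 --runtime=10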
00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:27:27.986 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:27.986 "params": { 00:27:27.986 "name": "Nvme0", 00:27:27.986 "trtype": "tcp", 00:27:27.986 "traddr": "10.0.0.3", 00:27:27.986 "adrfam": "ipv4", 00:27:27.986 "trsvcid": "4420", 00:27:27.986 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:27.986 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:27.986 "hdgst": false, 00:27:27.986 "ddgst": false 00:27:27.986 }, 00:27:27.987 "method": "bdev_nvme_attach_controller" 00:27:27.987 },{ 00:27:27.987 "params": { 00:27:27.987 "name": "Nvme1", 00:27:27.987 "trtype": "tcp", 00:27:27.987 "traddr": "10.0.0.3", 00:27:27.987 "adrfam": "ipv4", 00:27:27.987 "trsvcid": "4420", 00:27:27.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:27.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:27.987 "hdgst": false, 00:27:27.987 "ddgst": false 00:27:27.987 }, 00:27:27.987 "method": "bdev_nvme_attach_controller" 00:27:27.987 }' 00:27:27.987 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:27.987 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:27.987 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1351 -- # break 00:27:27.987 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:27.987 01:47:36 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:27.987 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:27.987 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:27.987 fio-3.35 00:27:27.987 Starting 2 threads 00:27:40.188 00:27:40.188 filename0: (groupid=0, jobs=1): err= 0: pid=89325: Sun Nov 17 01:47:47 2024 00:27:40.188 read: IOPS=4329, BW=16.9MiB/s (17.7MB/s)(169MiB/10001msec) 00:27:40.188 slat (nsec): min=7657, max=73175, avg=14456.05, stdev=4812.54 00:27:40.188 clat (usec): min=704, max=1560, avg=883.74, stdev=67.37 00:27:40.188 lat (usec): min=716, max=1623, avg=898.19, stdev=68.84 00:27:40.188 clat percentiles (usec): 00:27:40.188 | 1.00th=[ 750], 5.00th=[ 783], 10.00th=[ 799], 20.00th=[ 832], 00:27:40.188 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 881], 60.00th=[ 898], 00:27:40.188 | 70.00th=[ 914], 80.00th=[ 930], 90.00th=[ 971], 95.00th=[ 996], 00:27:40.188 | 99.00th=[ 1074], 99.50th=[ 1123], 99.90th=[ 1205], 99.95th=[ 1221], 00:27:40.188 | 99.99th=[ 1319] 00:27:40.188 bw ( KiB/s): min=16832, max=17728, per=50.00%, avg=17315.37, stdev=245.77, samples=19 00:27:40.188 iops : min= 4208, max= 4432, avg=4328.84, stdev=61.44, samples=19 00:27:40.188 lat (usec) : 750=1.10%, 1000=94.01% 00:27:40.188 lat (msec) : 2=4.89% 00:27:40.188 cpu : usr=91.30%, sys=7.42%, ctx=16, majf=0, minf=1061 00:27:40.188 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.188 issued rwts: total=43296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.188 latency : target=0, 
window=0, percentile=100.00%, depth=4 00:27:40.188 filename1: (groupid=0, jobs=1): err= 0: pid=89326: Sun Nov 17 01:47:47 2024 00:27:40.188 read: IOPS=4329, BW=16.9MiB/s (17.7MB/s)(169MiB/10001msec) 00:27:40.188 slat (nsec): min=7510, max=72378, avg=14491.98, stdev=5023.44 00:27:40.188 clat (usec): min=571, max=1632, avg=883.10, stdev=56.05 00:27:40.188 lat (usec): min=582, max=1669, avg=897.59, stdev=57.00 00:27:40.188 clat percentiles (usec): 00:27:40.188 | 1.00th=[ 791], 5.00th=[ 816], 10.00th=[ 824], 20.00th=[ 840], 00:27:40.188 | 30.00th=[ 848], 40.00th=[ 865], 50.00th=[ 873], 60.00th=[ 889], 00:27:40.188 | 70.00th=[ 906], 80.00th=[ 922], 90.00th=[ 955], 95.00th=[ 988], 00:27:40.188 | 99.00th=[ 1074], 99.50th=[ 1106], 99.90th=[ 1172], 99.95th=[ 1188], 00:27:40.188 | 99.99th=[ 1418] 00:27:40.188 bw ( KiB/s): min=16832, max=17728, per=50.00%, avg=17315.47, stdev=245.66, samples=19 00:27:40.188 iops : min= 4208, max= 4432, avg=4328.84, stdev=61.44, samples=19 00:27:40.188 lat (usec) : 750=0.01%, 1000=96.41% 00:27:40.188 lat (msec) : 2=3.58% 00:27:40.188 cpu : usr=90.46%, sys=8.17%, ctx=112, majf=0, minf=1062 00:27:40.188 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:40.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.188 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:40.188 issued rwts: total=43296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:40.188 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:40.188 00:27:40.188 Run status group 0 (all jobs): 00:27:40.188 READ: bw=33.8MiB/s (35.5MB/s), 16.9MiB/s-16.9MiB/s (17.7MB/s-17.7MB/s), io=338MiB (355MB), run=10001-10001msec 00:27:40.188 ----------------------------------------------------- 00:27:40.188 Suppressions used: 00:27:40.188 count bytes template 00:27:40.188 2 16 /usr/src/fio/parse.c 00:27:40.188 1 8 libtcmalloc_minimal.so 00:27:40.188 1 904 libcrypto.so 00:27:40.188 ----------------------------------------------------- 00:27:40.188 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:27:40.188 
01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:40.188 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.189 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:40.189 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.189 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:40.189 ************************************ 00:27:40.189 END TEST fio_dif_1_multi_subsystems 00:27:40.189 ************************************ 00:27:40.189 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.189 00:27:40.189 real 0m12.372s 00:27:40.189 user 0m20.142s 00:27:40.189 sys 0m1.926s 00:27:40.189 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:40.189 01:47:48 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:27:40.189 01:47:48 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:40.189 01:47:48 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:40.189 01:47:48 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:40.189 01:47:48 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:27:40.189 ************************************ 00:27:40.189 START TEST fio_dif_rand_params 00:27:40.189 ************************************ 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.189 01:47:48 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.189 bdev_null0 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:40.189 [2024-11-17 01:47:48.541601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:40.189 { 00:27:40.189 "params": { 00:27:40.189 "name": "Nvme$subsystem", 00:27:40.189 "trtype": "$TEST_TRANSPORT", 00:27:40.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:40.189 "adrfam": "ipv4", 00:27:40.189 "trsvcid": "$NVMF_PORT", 00:27:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:40.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:40.189 "hdgst": ${hdgst:-false}, 00:27:40.189 "ddgst": ${ddgst:-false} 00:27:40.189 }, 00:27:40.189 "method": "bdev_nvme_attach_controller" 00:27:40.189 } 00:27:40.189 EOF 00:27:40.189 )") 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:40.189 01:47:48 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:40.189 "params": { 00:27:40.189 "name": "Nvme0", 00:27:40.189 "trtype": "tcp", 00:27:40.189 "traddr": "10.0.0.3", 00:27:40.189 "adrfam": "ipv4", 00:27:40.189 "trsvcid": "4420", 00:27:40.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:40.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:40.189 "hdgst": false, 00:27:40.189 "ddgst": false 00:27:40.189 }, 00:27:40.189 "method": "bdev_nvme_attach_controller" 00:27:40.189 }' 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:40.189 01:47:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:40.448 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:40.448 ... 
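For fio_dif_rand_params the harness switches to DIF type 3 null bdevs with 128k blocks, three jobs, queue depth 3 and a 5-second runtime, on a transport created with --dif-insert-or-strip. Condensed from the rpc_cmd calls traced above, the target-side setup looks roughly like the sketch below; it assumes scripts/rpc.py talking to the default /var/tmp/spdk.sock of the nvmf_tgt started earlier and leaves out the harness's error handling.

# Sketch of the rpc sequence behind create_subsystems for the DIF-type-3 case.
rpc=scripts/rpc.py

# TCP transport with DIF insert/strip enabled (target/dif.sh@50 above).
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, protection type 3.
$rpc bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Expose it over NVMe/TCP on the namespaced 10.0.0.3:4420 listener.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
     --serial-number 53313233-0 --allow-any-host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
     -t tcp -a 10.0.0.3 -s 4420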
00:27:40.448 fio-3.35 00:27:40.448 Starting 3 threads 00:27:47.009 00:27:47.009 filename0: (groupid=0, jobs=1): err= 0: pid=89482: Sun Nov 17 01:47:54 2024 00:27:47.009 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(152MiB/5012msec) 00:27:47.009 slat (nsec): min=5327, max=61736, avg=18164.86, stdev=6116.42 00:27:47.009 clat (usec): min=11793, max=19623, avg=12301.93, stdev=611.31 00:27:47.009 lat (usec): min=11803, max=19649, avg=12320.10, stdev=611.90 00:27:47.009 clat percentiles (usec): 00:27:47.009 | 1.00th=[11863], 5.00th=[11863], 10.00th=[11863], 20.00th=[11994], 00:27:47.009 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:27:47.009 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:27:47.009 | 99.00th=[14353], 99.50th=[14615], 99.90th=[19530], 99.95th=[19530], 00:27:47.009 | 99.99th=[19530] 00:27:47.009 bw ( KiB/s): min=29952, max=32256, per=33.33%, avg=31104.00, stdev=652.67, samples=10 00:27:47.009 iops : min= 234, max= 252, avg=243.00, stdev= 5.10, samples=10 00:27:47.009 lat (msec) : 20=100.00% 00:27:47.009 cpu : usr=92.82%, sys=6.51%, ctx=33, majf=0, minf=1075 00:27:47.009 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:47.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.009 issued rwts: total=1218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.009 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:47.009 filename0: (groupid=0, jobs=1): err= 0: pid=89483: Sun Nov 17 01:47:54 2024 00:27:47.009 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(152MiB/5006msec) 00:27:47.009 slat (nsec): min=5476, max=63566, avg=18757.88, stdev=6902.01 00:27:47.009 clat (usec): min=11798, max=14713, avg=12284.88, stdev=498.51 00:27:47.009 lat (usec): min=11812, max=14734, avg=12303.64, stdev=499.26 00:27:47.009 clat percentiles (usec): 00:27:47.009 | 1.00th=[11863], 5.00th=[11863], 10.00th=[11863], 20.00th=[11994], 00:27:47.009 | 30.00th=[11994], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:27:47.009 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:27:47.009 | 99.00th=[14222], 99.50th=[14353], 99.90th=[14746], 99.95th=[14746], 00:27:47.009 | 99.99th=[14746] 00:27:47.009 bw ( KiB/s): min=30012, max=31488, per=33.34%, avg=31110.00, stdev=529.07, samples=10 00:27:47.009 iops : min= 234, max= 246, avg=243.00, stdev= 4.24, samples=10 00:27:47.009 lat (msec) : 20=100.00% 00:27:47.009 cpu : usr=92.25%, sys=6.93%, ctx=171, majf=0, minf=1073 00:27:47.009 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:47.009 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.009 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.009 issued rwts: total=1218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.009 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:47.009 filename0: (groupid=0, jobs=1): err= 0: pid=89484: Sun Nov 17 01:47:54 2024 00:27:47.009 read: IOPS=243, BW=30.4MiB/s (31.9MB/s)(152MiB/5007msec) 00:27:47.009 slat (nsec): min=5404, max=80001, avg=19101.18, stdev=7612.86 00:27:47.009 clat (usec): min=11727, max=15148, avg=12286.62, stdev=510.96 00:27:47.009 lat (usec): min=11736, max=15176, avg=12305.72, stdev=511.78 00:27:47.009 clat percentiles (usec): 00:27:47.009 | 1.00th=[11863], 5.00th=[11863], 10.00th=[11863], 20.00th=[11994], 00:27:47.009 | 30.00th=[11994], 40.00th=[11994], 
50.00th=[12125], 60.00th=[12256], 00:27:47.009 | 70.00th=[12387], 80.00th=[12518], 90.00th=[12911], 95.00th=[13435], 00:27:47.009 | 99.00th=[14353], 99.50th=[14484], 99.90th=[15139], 99.95th=[15139], 00:27:47.009 | 99.99th=[15139] 00:27:47.009 bw ( KiB/s): min=29952, max=31488, per=33.33%, avg=31104.00, stdev=543.06, samples=10 00:27:47.009 iops : min= 234, max= 246, avg=243.00, stdev= 4.24, samples=10 00:27:47.009 lat (msec) : 20=100.00% 00:27:47.009 cpu : usr=90.53%, sys=8.67%, ctx=93, majf=0, minf=1075 00:27:47.009 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:47.010 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.010 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.010 issued rwts: total=1218,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.010 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:47.010 00:27:47.010 Run status group 0 (all jobs): 00:27:47.010 READ: bw=91.1MiB/s (95.6MB/s), 30.4MiB/s-30.4MiB/s (31.9MB/s-31.9MB/s), io=457MiB (479MB), run=5006-5012msec 00:27:47.269 ----------------------------------------------------- 00:27:47.269 Suppressions used: 00:27:47.269 count bytes template 00:27:47.269 5 44 /usr/src/fio/parse.c 00:27:47.269 1 8 libtcmalloc_minimal.so 00:27:47.269 1 904 libcrypto.so 00:27:47.269 ----------------------------------------------------- 00:27:47.269 00:27:47.269 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:47.269 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:47.270 01:47:55 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 bdev_null0 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 [2024-11-17 01:47:55.644605] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 bdev_null1 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:47.270 01:47:55 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 bdev_null2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.3 -s 4420 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # 
config+=("$(cat <<-EOF 00:27:47.270 { 00:27:47.270 "params": { 00:27:47.270 "name": "Nvme$subsystem", 00:27:47.270 "trtype": "$TEST_TRANSPORT", 00:27:47.270 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.270 "adrfam": "ipv4", 00:27:47.270 "trsvcid": "$NVMF_PORT", 00:27:47.270 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.270 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.270 "hdgst": ${hdgst:-false}, 00:27:47.270 "ddgst": ${ddgst:-false} 00:27:47.270 }, 00:27:47.270 "method": "bdev_nvme_attach_controller" 00:27:47.270 } 00:27:47.270 EOF 00:27:47.270 )") 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:27:47.270 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:27:47.529 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:27:47.529 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:27:47.529 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.530 { 00:27:47.530 "params": { 00:27:47.530 "name": "Nvme$subsystem", 00:27:47.530 "trtype": "$TEST_TRANSPORT", 00:27:47.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.530 "adrfam": "ipv4", 00:27:47.530 "trsvcid": "$NVMF_PORT", 00:27:47.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.530 "hdgst": ${hdgst:-false}, 00:27:47.530 "ddgst": ${ddgst:-false} 00:27:47.530 }, 00:27:47.530 "method": "bdev_nvme_attach_controller" 00:27:47.530 } 00:27:47.530 EOF 00:27:47.530 )") 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:27:47.530 { 00:27:47.530 "params": { 00:27:47.530 "name": "Nvme$subsystem", 00:27:47.530 "trtype": "$TEST_TRANSPORT", 00:27:47.530 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:47.530 "adrfam": "ipv4", 00:27:47.530 "trsvcid": "$NVMF_PORT", 00:27:47.530 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:47.530 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:47.530 "hdgst": ${hdgst:-false}, 00:27:47.530 "ddgst": ${ddgst:-false} 00:27:47.530 }, 00:27:47.530 "method": "bdev_nvme_attach_controller" 00:27:47.530 } 00:27:47.530 EOF 00:27:47.530 )") 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:27:47.530 "params": { 00:27:47.530 "name": "Nvme0", 00:27:47.530 "trtype": "tcp", 00:27:47.530 "traddr": "10.0.0.3", 00:27:47.530 "adrfam": "ipv4", 00:27:47.530 "trsvcid": "4420", 00:27:47.530 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.530 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.530 "hdgst": false, 00:27:47.530 "ddgst": false 00:27:47.530 }, 00:27:47.530 "method": "bdev_nvme_attach_controller" 00:27:47.530 },{ 00:27:47.530 "params": { 00:27:47.530 "name": "Nvme1", 00:27:47.530 "trtype": "tcp", 00:27:47.530 "traddr": "10.0.0.3", 00:27:47.530 "adrfam": "ipv4", 00:27:47.530 "trsvcid": "4420", 00:27:47.530 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:47.530 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:47.530 "hdgst": false, 00:27:47.530 "ddgst": false 00:27:47.530 }, 00:27:47.530 "method": "bdev_nvme_attach_controller" 00:27:47.530 },{ 00:27:47.530 "params": { 00:27:47.530 "name": "Nvme2", 00:27:47.530 "trtype": "tcp", 00:27:47.530 "traddr": "10.0.0.3", 00:27:47.530 "adrfam": "ipv4", 00:27:47.530 "trsvcid": "4420", 00:27:47.530 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:47.530 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:47.530 "hdgst": false, 00:27:47.530 "ddgst": false 00:27:47.530 }, 00:27:47.530 "method": "bdev_nvme_attach_controller" 00:27:47.530 }' 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:27:47.530 01:47:55 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 
/dev/fd/61 00:27:47.530 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:47.530 ... 00:27:47.530 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:47.530 ... 00:27:47.530 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:47.530 ... 00:27:47.530 fio-3.35 00:27:47.530 Starting 24 threads 00:27:59.741 00:27:59.741 filename0: (groupid=0, jobs=1): err= 0: pid=89584: Sun Nov 17 01:48:07 2024 00:27:59.741 read: IOPS=200, BW=800KiB/s (820kB/s)(8008KiB/10004msec) 00:27:59.741 slat (usec): min=5, max=8897, avg=46.15, stdev=456.67 00:27:59.741 clat (msec): min=4, max=165, avg=79.77, stdev=22.29 00:27:59.741 lat (msec): min=4, max=165, avg=79.81, stdev=22.30 00:27:59.741 clat percentiles (msec): 00:27:59.741 | 1.00th=[ 14], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 62], 00:27:59.741 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 83], 60.00th=[ 88], 00:27:59.741 | 70.00th=[ 93], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 111], 00:27:59.741 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 155], 99.95th=[ 165], 00:27:59.741 | 99.99th=[ 165] 00:27:59.741 bw ( KiB/s): min= 528, max= 896, per=4.29%, avg=787.79, stdev=81.34, samples=19 00:27:59.741 iops : min= 132, max= 224, avg=196.95, stdev=20.33, samples=19 00:27:59.741 lat (msec) : 10=0.65%, 20=0.80%, 50=4.35%, 100=78.67%, 250=15.53% 00:27:59.741 cpu : usr=40.14%, sys=2.58%, ctx=1288, majf=0, minf=1074 00:27:59.742 IO depths : 1=0.1%, 2=0.3%, 4=1.1%, 8=83.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:59.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 complete : 0=0.0%, 4=86.9%, 8=12.8%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.742 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.742 filename0: (groupid=0, jobs=1): err= 0: pid=89585: Sun Nov 17 01:48:07 2024 00:27:59.742 read: IOPS=185, BW=743KiB/s (761kB/s)(7476KiB/10057msec) 00:27:59.742 slat (usec): min=5, max=8032, avg=32.33, stdev=334.00 00:27:59.742 clat (msec): min=19, max=148, avg=85.84, stdev=21.96 00:27:59.742 lat (msec): min=19, max=148, avg=85.87, stdev=21.97 00:27:59.742 clat percentiles (msec): 00:27:59.742 | 1.00th=[ 24], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 65], 00:27:59.742 | 30.00th=[ 74], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 94], 00:27:59.742 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 108], 95.00th=[ 121], 00:27:59.742 | 99.00th=[ 146], 99.50th=[ 146], 99.90th=[ 148], 99.95th=[ 148], 00:27:59.742 | 99.99th=[ 148] 00:27:59.742 bw ( KiB/s): min= 592, max= 894, per=4.04%, avg=741.00, stdev=69.12, samples=20 00:27:59.742 iops : min= 148, max= 223, avg=185.20, stdev=17.24, samples=20 00:27:59.742 lat (msec) : 20=0.75%, 50=3.32%, 100=75.92%, 250=20.01% 00:27:59.742 cpu : usr=36.21%, sys=2.06%, ctx=1036, majf=0, minf=1075 00:27:59.742 IO depths : 1=0.1%, 2=0.9%, 4=3.5%, 8=79.4%, 16=16.2%, 32=0.0%, >=64=0.0% 00:27:59.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 complete : 0=0.0%, 4=88.4%, 8=10.8%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 issued rwts: total=1869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.742 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.742 filename0: (groupid=0, jobs=1): err= 0: pid=89586: Sun Nov 17 01:48:07 2024 00:27:59.742 read: IOPS=176, BW=704KiB/s 
(721kB/s)(7068KiB/10036msec) 00:27:59.742 slat (usec): min=5, max=8033, avg=21.05, stdev=190.81 00:27:59.742 clat (msec): min=19, max=155, avg=90.61, stdev=21.48 00:27:59.742 lat (msec): min=19, max=155, avg=90.64, stdev=21.48 00:27:59.742 clat percentiles (msec): 00:27:59.742 | 1.00th=[ 22], 5.00th=[ 61], 10.00th=[ 61], 20.00th=[ 82], 00:27:59.742 | 30.00th=[ 86], 40.00th=[ 89], 50.00th=[ 94], 60.00th=[ 96], 00:27:59.742 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 112], 95.00th=[ 124], 00:27:59.742 | 99.00th=[ 144], 99.50th=[ 144], 99.90th=[ 157], 99.95th=[ 157], 00:27:59.742 | 99.99th=[ 157] 00:27:59.742 bw ( KiB/s): min= 560, max= 1017, per=3.83%, avg=702.35, stdev=101.95, samples=20 00:27:59.742 iops : min= 140, max= 254, avg=175.55, stdev=25.46, samples=20 00:27:59.742 lat (msec) : 20=0.11%, 50=3.68%, 100=69.33%, 250=26.88% 00:27:59.742 cpu : usr=35.95%, sys=2.17%, ctx=1067, majf=0, minf=1073 00:27:59.742 IO depths : 1=0.1%, 2=3.3%, 4=13.3%, 8=68.8%, 16=14.5%, 32=0.0%, >=64=0.0% 00:27:59.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 complete : 0=0.0%, 4=91.1%, 8=6.0%, 16=2.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 issued rwts: total=1767,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.742 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.742 filename0: (groupid=0, jobs=1): err= 0: pid=89587: Sun Nov 17 01:48:07 2024 00:27:59.742 read: IOPS=201, BW=804KiB/s (824kB/s)(8092KiB/10059msec) 00:27:59.742 slat (usec): min=6, max=8034, avg=39.39, stdev=388.03 00:27:59.742 clat (msec): min=3, max=148, avg=79.25, stdev=27.31 00:27:59.742 lat (msec): min=3, max=148, avg=79.29, stdev=27.31 00:27:59.742 clat percentiles (msec): 00:27:59.742 | 1.00th=[ 5], 5.00th=[ 12], 10.00th=[ 52], 20.00th=[ 62], 00:27:59.742 | 30.00th=[ 68], 40.00th=[ 82], 50.00th=[ 87], 60.00th=[ 92], 00:27:59.742 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 106], 95.00th=[ 111], 00:27:59.742 | 99.00th=[ 142], 99.50th=[ 146], 99.90th=[ 148], 99.95th=[ 148], 00:27:59.742 | 99.99th=[ 148] 00:27:59.742 bw ( KiB/s): min= 584, max= 2032, per=4.39%, avg=805.20, stdev=296.50, samples=20 00:27:59.742 iops : min= 146, max= 508, avg=201.30, stdev=74.12, samples=20 00:27:59.742 lat (msec) : 4=0.69%, 10=4.05%, 20=1.68%, 50=3.36%, 100=74.54% 00:27:59.742 lat (msec) : 250=15.67% 00:27:59.742 cpu : usr=44.76%, sys=3.03%, ctx=1399, majf=0, minf=1075 00:27:59.742 IO depths : 1=0.3%, 2=1.3%, 4=4.1%, 8=78.5%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:59.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 complete : 0=0.0%, 4=88.5%, 8=10.6%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 issued rwts: total=2023,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.742 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.742 filename0: (groupid=0, jobs=1): err= 0: pid=89588: Sun Nov 17 01:48:07 2024 00:27:59.742 read: IOPS=187, BW=749KiB/s (767kB/s)(7504KiB/10019msec) 00:27:59.742 slat (usec): min=5, max=8036, avg=35.34, stdev=307.03 00:27:59.742 clat (msec): min=23, max=151, avg=85.22, stdev=19.81 00:27:59.742 lat (msec): min=23, max=151, avg=85.26, stdev=19.82 00:27:59.742 clat percentiles (msec): 00:27:59.742 | 1.00th=[ 37], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 65], 00:27:59.742 | 30.00th=[ 71], 40.00th=[ 85], 50.00th=[ 89], 60.00th=[ 92], 00:27:59.742 | 70.00th=[ 95], 80.00th=[ 101], 90.00th=[ 108], 95.00th=[ 117], 00:27:59.742 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 153], 99.95th=[ 153], 00:27:59.742 | 99.99th=[ 153] 00:27:59.742 bw ( KiB/s): 
min= 640, max= 864, per=4.06%, avg=745.68, stdev=73.98, samples=19 00:27:59.742 iops : min= 160, max= 216, avg=186.42, stdev=18.49, samples=19 00:27:59.742 lat (msec) : 50=1.71%, 100=77.99%, 250=20.31% 00:27:59.742 cpu : usr=39.22%, sys=2.38%, ctx=1342, majf=0, minf=1072 00:27:59.742 IO depths : 1=0.1%, 2=1.9%, 4=7.4%, 8=76.0%, 16=14.7%, 32=0.0%, >=64=0.0% 00:27:59.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 complete : 0=0.0%, 4=88.8%, 8=9.6%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 issued rwts: total=1876,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.742 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.742 filename0: (groupid=0, jobs=1): err= 0: pid=89589: Sun Nov 17 01:48:07 2024 00:27:59.742 read: IOPS=173, BW=695KiB/s (712kB/s)(6956KiB/10004msec) 00:27:59.742 slat (usec): min=5, max=8029, avg=25.22, stdev=216.59 00:27:59.742 clat (msec): min=4, max=164, avg=91.88, stdev=21.99 00:27:59.742 lat (msec): min=4, max=164, avg=91.91, stdev=21.99 00:27:59.742 clat percentiles (msec): 00:27:59.742 | 1.00th=[ 14], 5.00th=[ 57], 10.00th=[ 64], 20.00th=[ 79], 00:27:59.742 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 93], 60.00th=[ 96], 00:27:59.742 | 70.00th=[ 102], 80.00th=[ 108], 90.00th=[ 115], 95.00th=[ 122], 00:27:59.742 | 99.00th=[ 146], 99.50th=[ 150], 99.90th=[ 165], 99.95th=[ 165], 00:27:59.742 | 99.99th=[ 165] 00:27:59.742 bw ( KiB/s): min= 528, max= 824, per=3.68%, avg=675.79, stdev=74.35, samples=19 00:27:59.742 iops : min= 132, max= 206, avg=168.95, stdev=18.59, samples=19 00:27:59.742 lat (msec) : 10=0.69%, 20=0.92%, 50=1.67%, 100=64.17%, 250=32.55% 00:27:59.742 cpu : usr=42.41%, sys=2.41%, ctx=1519, majf=0, minf=1073 00:27:59.742 IO depths : 1=0.1%, 2=4.1%, 4=16.5%, 8=65.7%, 16=13.6%, 32=0.0%, >=64=0.0% 00:27:59.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 complete : 0=0.0%, 4=91.7%, 8=4.7%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 issued rwts: total=1739,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.742 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.742 filename0: (groupid=0, jobs=1): err= 0: pid=89590: Sun Nov 17 01:48:07 2024 00:27:59.742 read: IOPS=195, BW=780KiB/s (799kB/s)(7840KiB/10047msec) 00:27:59.742 slat (usec): min=5, max=8033, avg=24.05, stdev=202.59 00:27:59.742 clat (msec): min=36, max=150, avg=81.86, stdev=19.37 00:27:59.742 lat (msec): min=36, max=150, avg=81.88, stdev=19.37 00:27:59.742 clat percentiles (msec): 00:27:59.742 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 60], 20.00th=[ 63], 00:27:59.742 | 30.00th=[ 67], 40.00th=[ 73], 50.00th=[ 85], 60.00th=[ 87], 00:27:59.742 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 111], 00:27:59.742 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 150], 99.95th=[ 150], 00:27:59.742 | 99.99th=[ 150] 00:27:59.742 bw ( KiB/s): min= 650, max= 872, per=4.23%, avg=776.95, stdev=57.07, samples=20 00:27:59.742 iops : min= 162, max= 218, avg=194.20, stdev=14.33, samples=20 00:27:59.742 lat (msec) : 50=3.01%, 100=80.61%, 250=16.38% 00:27:59.742 cpu : usr=36.78%, sys=2.13%, ctx=1274, majf=0, minf=1074 00:27:59.742 IO depths : 1=0.1%, 2=0.9%, 4=3.4%, 8=80.4%, 16=15.3%, 32=0.0%, >=64=0.0% 00:27:59.742 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 complete : 0=0.0%, 4=87.7%, 8=11.6%, 16=0.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.742 issued rwts: total=1960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.742 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:27:59.742 filename0: (groupid=0, jobs=1): err= 0: pid=89591: Sun Nov 17 01:48:07 2024 00:27:59.742 read: IOPS=184, BW=739KiB/s (757kB/s)(7412KiB/10029msec) 00:27:59.742 slat (usec): min=4, max=8037, avg=26.56, stdev=263.47 00:27:59.742 clat (msec): min=34, max=146, avg=86.34, stdev=19.94 00:27:59.743 lat (msec): min=34, max=146, avg=86.36, stdev=19.94 00:27:59.743 clat percentiles (msec): 00:27:59.743 | 1.00th=[ 47], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 63], 00:27:59.743 | 30.00th=[ 72], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 96], 00:27:59.743 | 70.00th=[ 96], 80.00th=[ 105], 90.00th=[ 108], 95.00th=[ 121], 00:27:59.743 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:27:59.743 | 99.99th=[ 146] 00:27:59.743 bw ( KiB/s): min= 638, max= 816, per=4.05%, avg=743.89, stdev=65.28, samples=19 00:27:59.743 iops : min= 159, max= 204, avg=185.95, stdev=16.37, samples=19 00:27:59.743 lat (msec) : 50=2.54%, 100=76.36%, 250=21.10% 00:27:59.743 cpu : usr=31.24%, sys=1.77%, ctx=843, majf=0, minf=1072 00:27:59.743 IO depths : 1=0.1%, 2=1.9%, 4=7.5%, 8=75.7%, 16=14.8%, 32=0.0%, >=64=0.0% 00:27:59.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 complete : 0=0.0%, 4=89.0%, 8=9.4%, 16=1.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 issued rwts: total=1853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.743 filename1: (groupid=0, jobs=1): err= 0: pid=89592: Sun Nov 17 01:48:07 2024 00:27:59.743 read: IOPS=191, BW=765KiB/s (783kB/s)(7664KiB/10018msec) 00:27:59.743 slat (usec): min=4, max=8032, avg=26.17, stdev=258.95 00:27:59.743 clat (msec): min=23, max=153, avg=83.50, stdev=20.21 00:27:59.743 lat (msec): min=23, max=153, avg=83.53, stdev=20.21 00:27:59.743 clat percentiles (msec): 00:27:59.743 | 1.00th=[ 39], 5.00th=[ 57], 10.00th=[ 61], 20.00th=[ 62], 00:27:59.743 | 30.00th=[ 72], 40.00th=[ 83], 50.00th=[ 85], 60.00th=[ 93], 00:27:59.743 | 70.00th=[ 96], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 116], 00:27:59.743 | 99.00th=[ 136], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:27:59.743 | 99.99th=[ 155] 00:27:59.743 bw ( KiB/s): min= 616, max= 872, per=4.14%, avg=759.63, stdev=74.12, samples=19 00:27:59.743 iops : min= 154, max= 218, avg=189.89, stdev=18.55, samples=19 00:27:59.743 lat (msec) : 50=3.44%, 100=80.58%, 250=15.97% 00:27:59.743 cpu : usr=31.16%, sys=1.88%, ctx=844, majf=0, minf=1074 00:27:59.743 IO depths : 1=0.1%, 2=1.3%, 4=5.2%, 8=78.5%, 16=15.0%, 32=0.0%, >=64=0.0% 00:27:59.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 complete : 0=0.0%, 4=88.1%, 8=10.8%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 issued rwts: total=1916,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.743 filename1: (groupid=0, jobs=1): err= 0: pid=89593: Sun Nov 17 01:48:07 2024 00:27:59.743 read: IOPS=193, BW=773KiB/s (792kB/s)(7784KiB/10068msec) 00:27:59.743 slat (usec): min=5, max=8042, avg=22.74, stdev=203.48 00:27:59.743 clat (msec): min=5, max=153, avg=82.60, stdev=25.88 00:27:59.743 lat (msec): min=5, max=153, avg=82.62, stdev=25.87 00:27:59.743 clat percentiles (msec): 00:27:59.743 | 1.00th=[ 6], 5.00th=[ 21], 10.00th=[ 58], 20.00th=[ 64], 00:27:59.743 | 30.00th=[ 73], 40.00th=[ 85], 50.00th=[ 87], 60.00th=[ 94], 00:27:59.743 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 113], 00:27:59.743 | 99.00th=[ 142], 99.50th=[ 142], 99.90th=[ 155], 
99.95th=[ 155], 00:27:59.743 | 99.99th=[ 155] 00:27:59.743 bw ( KiB/s): min= 616, max= 1664, per=4.21%, avg=772.00, stdev=218.22, samples=20 00:27:59.743 iops : min= 154, max= 416, avg=193.00, stdev=54.55, samples=20 00:27:59.743 lat (msec) : 10=3.19%, 20=1.75%, 50=3.55%, 100=72.92%, 250=18.60% 00:27:59.743 cpu : usr=34.29%, sys=1.90%, ctx=974, majf=0, minf=1072 00:27:59.743 IO depths : 1=0.2%, 2=1.4%, 4=4.9%, 8=77.7%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:59.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 complete : 0=0.0%, 4=88.9%, 8=10.0%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.743 filename1: (groupid=0, jobs=1): err= 0: pid=89594: Sun Nov 17 01:48:07 2024 00:27:59.743 read: IOPS=198, BW=793KiB/s (812kB/s)(7940KiB/10010msec) 00:27:59.743 slat (usec): min=4, max=8039, avg=40.67, stdev=391.73 00:27:59.743 clat (msec): min=21, max=165, avg=80.48, stdev=20.18 00:27:59.743 lat (msec): min=21, max=166, avg=80.52, stdev=20.18 00:27:59.743 clat percentiles (msec): 00:27:59.743 | 1.00th=[ 46], 5.00th=[ 52], 10.00th=[ 58], 20.00th=[ 61], 00:27:59.743 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 88], 00:27:59.743 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 110], 00:27:59.743 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 167], 99.95th=[ 167], 00:27:59.743 | 99.99th=[ 167] 00:27:59.743 bw ( KiB/s): min= 624, max= 872, per=4.32%, avg=792.63, stdev=59.51, samples=19 00:27:59.743 iops : min= 156, max= 218, avg=198.16, stdev=14.88, samples=19 00:27:59.743 lat (msec) : 50=4.63%, 100=81.16%, 250=14.21% 00:27:59.743 cpu : usr=35.87%, sys=2.41%, ctx=1063, majf=0, minf=1073 00:27:59.743 IO depths : 1=0.1%, 2=0.3%, 4=1.2%, 8=83.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:59.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 complete : 0=0.0%, 4=86.9%, 8=12.9%, 16=0.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 issued rwts: total=1985,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.743 filename1: (groupid=0, jobs=1): err= 0: pid=89595: Sun Nov 17 01:48:07 2024 00:27:59.743 read: IOPS=189, BW=759KiB/s (777kB/s)(7632KiB/10059msec) 00:27:59.743 slat (usec): min=5, max=8040, avg=28.82, stdev=242.99 00:27:59.743 clat (msec): min=25, max=149, avg=84.05, stdev=19.96 00:27:59.743 lat (msec): min=25, max=149, avg=84.08, stdev=19.97 00:27:59.743 clat percentiles (msec): 00:27:59.743 | 1.00th=[ 40], 5.00th=[ 55], 10.00th=[ 61], 20.00th=[ 64], 00:27:59.743 | 30.00th=[ 71], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 92], 00:27:59.743 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 106], 95.00th=[ 114], 00:27:59.743 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:27:59.743 | 99.99th=[ 150] 00:27:59.743 bw ( KiB/s): min= 584, max= 897, per=4.12%, avg=756.75, stdev=75.81, samples=20 00:27:59.743 iops : min= 146, max= 224, avg=189.15, stdev=18.94, samples=20 00:27:59.743 lat (msec) : 50=2.99%, 100=80.40%, 250=16.61% 00:27:59.743 cpu : usr=42.95%, sys=2.37%, ctx=1315, majf=0, minf=1072 00:27:59.743 IO depths : 1=0.1%, 2=1.0%, 4=4.1%, 8=79.2%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:59.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 complete : 0=0.0%, 4=88.2%, 8=10.9%, 16=0.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 issued rwts: total=1908,0,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:27:59.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.743 filename1: (groupid=0, jobs=1): err= 0: pid=89596: Sun Nov 17 01:48:07 2024 00:27:59.743 read: IOPS=197, BW=789KiB/s (808kB/s)(7928KiB/10044msec) 00:27:59.743 slat (usec): min=4, max=8044, avg=27.66, stdev=270.30 00:27:59.743 clat (msec): min=29, max=146, avg=80.87, stdev=19.66 00:27:59.743 lat (msec): min=29, max=146, avg=80.90, stdev=19.65 00:27:59.743 clat percentiles (msec): 00:27:59.743 | 1.00th=[ 44], 5.00th=[ 51], 10.00th=[ 57], 20.00th=[ 61], 00:27:59.743 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 85], 60.00th=[ 88], 00:27:59.743 | 70.00th=[ 95], 80.00th=[ 96], 90.00th=[ 106], 95.00th=[ 109], 00:27:59.743 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 146], 99.95th=[ 146], 00:27:59.743 | 99.99th=[ 146] 00:27:59.743 bw ( KiB/s): min= 664, max= 904, per=4.33%, avg=795.26, stdev=51.74, samples=19 00:27:59.743 iops : min= 166, max= 226, avg=198.79, stdev=12.94, samples=19 00:27:59.743 lat (msec) : 50=4.79%, 100=80.98%, 250=14.23% 00:27:59.743 cpu : usr=37.04%, sys=2.25%, ctx=1064, majf=0, minf=1073 00:27:59.743 IO depths : 1=0.1%, 2=0.2%, 4=0.9%, 8=83.0%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:59.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 complete : 0=0.0%, 4=87.1%, 8=12.7%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 issued rwts: total=1982,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.743 filename1: (groupid=0, jobs=1): err= 0: pid=89597: Sun Nov 17 01:48:07 2024 00:27:59.743 read: IOPS=200, BW=804KiB/s (823kB/s)(8068KiB/10040msec) 00:27:59.743 slat (usec): min=5, max=8037, avg=37.61, stdev=303.98 00:27:59.743 clat (msec): min=26, max=149, avg=79.42, stdev=19.64 00:27:59.743 lat (msec): min=26, max=149, avg=79.46, stdev=19.64 00:27:59.743 clat percentiles (msec): 00:27:59.743 | 1.00th=[ 44], 5.00th=[ 52], 10.00th=[ 57], 20.00th=[ 61], 00:27:59.743 | 30.00th=[ 64], 40.00th=[ 70], 50.00th=[ 81], 60.00th=[ 88], 00:27:59.743 | 70.00th=[ 92], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 109], 00:27:59.743 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:27:59.743 | 99.99th=[ 150] 00:27:59.743 bw ( KiB/s): min= 627, max= 952, per=4.36%, avg=800.60, stdev=71.02, samples=20 00:27:59.743 iops : min= 156, max= 238, avg=200.10, stdev=17.86, samples=20 00:27:59.743 lat (msec) : 50=3.42%, 100=83.24%, 250=13.34% 00:27:59.743 cpu : usr=39.81%, sys=2.51%, ctx=1326, majf=0, minf=1074 00:27:59.743 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.6%, 16=15.7%, 32=0.0%, >=64=0.0% 00:27:59.743 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 complete : 0=0.0%, 4=86.8%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.743 issued rwts: total=2017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.743 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.743 filename1: (groupid=0, jobs=1): err= 0: pid=89598: Sun Nov 17 01:48:07 2024 00:27:59.743 read: IOPS=185, BW=742KiB/s (760kB/s)(7468KiB/10058msec) 00:27:59.743 slat (usec): min=6, max=8037, avg=34.26, stdev=370.85 00:27:59.743 clat (msec): min=15, max=149, avg=85.92, stdev=21.76 00:27:59.743 lat (msec): min=15, max=149, avg=85.95, stdev=21.76 00:27:59.743 clat percentiles (msec): 00:27:59.743 | 1.00th=[ 26], 5.00th=[ 55], 10.00th=[ 60], 20.00th=[ 66], 00:27:59.743 | 30.00th=[ 78], 40.00th=[ 85], 50.00th=[ 88], 60.00th=[ 93], 00:27:59.743 | 70.00th=[ 96], 80.00th=[ 102], 90.00th=[ 109], 
95.00th=[ 120], 00:27:59.743 | 99.00th=[ 144], 99.50th=[ 148], 99.90th=[ 150], 99.95th=[ 150], 00:27:59.743 | 99.99th=[ 150] 00:27:59.743 bw ( KiB/s): min= 552, max= 1129, per=4.03%, avg=739.95, stdev=114.92, samples=20 00:27:59.743 iops : min= 138, max= 282, avg=184.95, stdev=28.70, samples=20 00:27:59.743 lat (msec) : 20=0.11%, 50=4.61%, 100=73.97%, 250=21.32% 00:27:59.743 cpu : usr=34.29%, sys=2.08%, ctx=1084, majf=0, minf=1074 00:27:59.744 IO depths : 1=0.1%, 2=1.3%, 4=5.3%, 8=77.5%, 16=15.9%, 32=0.0%, >=64=0.0% 00:27:59.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 complete : 0=0.0%, 4=88.9%, 8=9.9%, 16=1.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 issued rwts: total=1867,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.744 filename1: (groupid=0, jobs=1): err= 0: pid=89599: Sun Nov 17 01:48:07 2024 00:27:59.744 read: IOPS=180, BW=722KiB/s (739kB/s)(7252KiB/10042msec) 00:27:59.744 slat (usec): min=5, max=4040, avg=25.10, stdev=163.40 00:27:59.744 clat (msec): min=33, max=149, avg=88.35, stdev=20.04 00:27:59.744 lat (msec): min=33, max=149, avg=88.37, stdev=20.04 00:27:59.744 clat percentiles (msec): 00:27:59.744 | 1.00th=[ 48], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 68], 00:27:59.744 | 30.00th=[ 77], 40.00th=[ 86], 50.00th=[ 91], 60.00th=[ 95], 00:27:59.744 | 70.00th=[ 97], 80.00th=[ 107], 90.00th=[ 111], 95.00th=[ 121], 00:27:59.744 | 99.00th=[ 140], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:27:59.744 | 99.99th=[ 150] 00:27:59.744 bw ( KiB/s): min= 512, max= 824, per=3.92%, avg=720.80, stdev=88.28, samples=20 00:27:59.744 iops : min= 128, max= 206, avg=180.20, stdev=22.07, samples=20 00:27:59.744 lat (msec) : 50=1.27%, 100=72.97%, 250=25.76% 00:27:59.744 cpu : usr=36.61%, sys=2.29%, ctx=1044, majf=0, minf=1073 00:27:59.744 IO depths : 1=0.1%, 2=2.6%, 4=10.3%, 8=72.7%, 16=14.3%, 32=0.0%, >=64=0.0% 00:27:59.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 complete : 0=0.0%, 4=89.7%, 8=8.0%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 issued rwts: total=1813,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.744 filename2: (groupid=0, jobs=1): err= 0: pid=89600: Sun Nov 17 01:48:07 2024 00:27:59.744 read: IOPS=199, BW=798KiB/s (817kB/s)(8020KiB/10046msec) 00:27:59.744 slat (usec): min=5, max=8041, avg=24.14, stdev=200.45 00:27:59.744 clat (msec): min=27, max=154, avg=79.91, stdev=19.85 00:27:59.744 lat (msec): min=27, max=154, avg=79.94, stdev=19.84 00:27:59.744 clat percentiles (msec): 00:27:59.744 | 1.00th=[ 46], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 61], 00:27:59.744 | 30.00th=[ 65], 40.00th=[ 71], 50.00th=[ 84], 60.00th=[ 87], 00:27:59.744 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 109], 00:27:59.744 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 155], 99.95th=[ 155], 00:27:59.744 | 99.99th=[ 155] 00:27:59.744 bw ( KiB/s): min= 616, max= 1016, per=4.33%, avg=795.55, stdev=80.01, samples=20 00:27:59.744 iops : min= 154, max= 254, avg=198.85, stdev=20.01, samples=20 00:27:59.744 lat (msec) : 50=5.24%, 100=81.25%, 250=13.52% 00:27:59.744 cpu : usr=37.43%, sys=2.15%, ctx=1040, majf=0, minf=1071 00:27:59.744 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.5%, 16=15.8%, 32=0.0%, >=64=0.0% 00:27:59.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 complete : 0=0.0%, 4=86.9%, 8=13.0%, 16=0.1%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:27:59.744 issued rwts: total=2005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.744 filename2: (groupid=0, jobs=1): err= 0: pid=89601: Sun Nov 17 01:48:07 2024 00:27:59.744 read: IOPS=181, BW=727KiB/s (744kB/s)(7276KiB/10011msec) 00:27:59.744 slat (usec): min=4, max=12022, avg=34.09, stdev=387.89 00:27:59.744 clat (msec): min=19, max=186, avg=87.76, stdev=20.97 00:27:59.744 lat (msec): min=23, max=186, avg=87.79, stdev=20.99 00:27:59.744 clat percentiles (msec): 00:27:59.744 | 1.00th=[ 36], 5.00th=[ 60], 10.00th=[ 61], 20.00th=[ 70], 00:27:59.744 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 93], 60.00th=[ 96], 00:27:59.744 | 70.00th=[ 96], 80.00th=[ 100], 90.00th=[ 108], 95.00th=[ 121], 00:27:59.744 | 99.00th=[ 153], 99.50th=[ 165], 99.90th=[ 176], 99.95th=[ 186], 00:27:59.744 | 99.99th=[ 186] 00:27:59.744 bw ( KiB/s): min= 512, max= 824, per=3.93%, avg=721.32, stdev=88.70, samples=19 00:27:59.744 iops : min= 128, max= 206, avg=180.32, stdev=22.16, samples=19 00:27:59.744 lat (msec) : 20=0.05%, 50=2.53%, 100=77.90%, 250=19.52% 00:27:59.744 cpu : usr=31.06%, sys=1.95%, ctx=835, majf=0, minf=1072 00:27:59.744 IO depths : 1=0.1%, 2=2.6%, 4=10.5%, 8=72.5%, 16=14.3%, 32=0.0%, >=64=0.0% 00:27:59.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 complete : 0=0.0%, 4=89.8%, 8=7.9%, 16=2.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 issued rwts: total=1819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.744 filename2: (groupid=0, jobs=1): err= 0: pid=89602: Sun Nov 17 01:48:07 2024 00:27:59.744 read: IOPS=199, BW=799KiB/s (818kB/s)(8008KiB/10022msec) 00:27:59.744 slat (usec): min=5, max=7030, avg=24.71, stdev=192.73 00:27:59.744 clat (msec): min=23, max=159, avg=79.95, stdev=19.63 00:27:59.744 lat (msec): min=23, max=159, avg=79.97, stdev=19.64 00:27:59.744 clat percentiles (msec): 00:27:59.744 | 1.00th=[ 47], 5.00th=[ 50], 10.00th=[ 58], 20.00th=[ 61], 00:27:59.744 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 84], 60.00th=[ 88], 00:27:59.744 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 104], 95.00th=[ 108], 00:27:59.744 | 99.00th=[ 138], 99.50th=[ 142], 99.90th=[ 161], 99.95th=[ 161], 00:27:59.744 | 99.99th=[ 161] 00:27:59.744 bw ( KiB/s): min= 720, max= 872, per=4.35%, avg=800.00, stdev=41.91, samples=19 00:27:59.744 iops : min= 180, max= 218, avg=200.00, stdev=10.48, samples=19 00:27:59.744 lat (msec) : 50=5.44%, 100=81.67%, 250=12.89% 00:27:59.744 cpu : usr=37.27%, sys=2.39%, ctx=1123, majf=0, minf=1073 00:27:59.744 IO depths : 1=0.1%, 2=0.2%, 4=0.7%, 8=83.4%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:59.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 complete : 0=0.0%, 4=86.8%, 8=13.0%, 16=0.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 issued rwts: total=2002,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.744 filename2: (groupid=0, jobs=1): err= 0: pid=89603: Sun Nov 17 01:48:07 2024 00:27:59.744 read: IOPS=210, BW=841KiB/s (862kB/s)(8468KiB/10063msec) 00:27:59.744 slat (usec): min=5, max=2141, avg=17.98, stdev=46.63 00:27:59.744 clat (usec): min=1996, max=154011, avg=75763.65, stdev=32723.95 00:27:59.744 lat (msec): min=2, max=154, avg=75.78, stdev=32.72 00:27:59.744 clat percentiles (msec): 00:27:59.744 | 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 6], 20.00th=[ 61], 00:27:59.744 | 30.00th=[ 67], 40.00th=[ 81], 
50.00th=[ 85], 60.00th=[ 92], 00:27:59.744 | 70.00th=[ 96], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 114], 00:27:59.744 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:27:59.744 | 99.99th=[ 155] 00:27:59.744 bw ( KiB/s): min= 512, max= 2926, per=4.58%, avg=842.00, stdev=494.99, samples=20 00:27:59.744 iops : min= 128, max= 731, avg=210.45, stdev=123.64, samples=20 00:27:59.744 lat (msec) : 2=0.05%, 4=5.90%, 10=5.38%, 20=0.85%, 50=3.54% 00:27:59.744 lat (msec) : 100=68.40%, 250=15.87% 00:27:59.744 cpu : usr=37.35%, sys=2.38%, ctx=1297, majf=0, minf=1073 00:27:59.744 IO depths : 1=0.6%, 2=1.8%, 4=5.1%, 8=77.0%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:59.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 complete : 0=0.0%, 4=89.0%, 8=9.9%, 16=1.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 issued rwts: total=2117,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.744 filename2: (groupid=0, jobs=1): err= 0: pid=89604: Sun Nov 17 01:48:07 2024 00:27:59.744 read: IOPS=200, BW=803KiB/s (822kB/s)(8060KiB/10038msec) 00:27:59.744 slat (usec): min=5, max=8033, avg=22.40, stdev=178.66 00:27:59.744 clat (msec): min=36, max=147, avg=79.56, stdev=19.44 00:27:59.744 lat (msec): min=36, max=147, avg=79.58, stdev=19.45 00:27:59.744 clat percentiles (msec): 00:27:59.744 | 1.00th=[ 46], 5.00th=[ 51], 10.00th=[ 58], 20.00th=[ 61], 00:27:59.744 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 83], 60.00th=[ 87], 00:27:59.744 | 70.00th=[ 94], 80.00th=[ 96], 90.00th=[ 105], 95.00th=[ 108], 00:27:59.744 | 99.00th=[ 136], 99.50th=[ 142], 99.90th=[ 148], 99.95th=[ 148], 00:27:59.744 | 99.99th=[ 148] 00:27:59.744 bw ( KiB/s): min= 696, max= 888, per=4.36%, avg=800.80, stdev=50.66, samples=20 00:27:59.744 iops : min= 174, max= 222, avg=200.20, stdev=12.66, samples=20 00:27:59.744 lat (msec) : 50=4.76%, 100=82.78%, 250=12.46% 00:27:59.744 cpu : usr=35.81%, sys=2.04%, ctx=1038, majf=0, minf=1072 00:27:59.744 IO depths : 1=0.1%, 2=0.1%, 4=0.5%, 8=83.7%, 16=15.6%, 32=0.0%, >=64=0.0% 00:27:59.744 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 complete : 0=0.0%, 4=86.7%, 8=13.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.744 issued rwts: total=2015,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.744 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.744 filename2: (groupid=0, jobs=1): err= 0: pid=89605: Sun Nov 17 01:48:07 2024 00:27:59.745 read: IOPS=194, BW=779KiB/s (798kB/s)(7828KiB/10050msec) 00:27:59.745 slat (usec): min=9, max=8036, avg=29.68, stdev=311.30 00:27:59.745 clat (msec): min=34, max=150, avg=81.95, stdev=19.80 00:27:59.745 lat (msec): min=34, max=150, avg=81.98, stdev=19.80 00:27:59.745 clat percentiles (msec): 00:27:59.745 | 1.00th=[ 42], 5.00th=[ 53], 10.00th=[ 59], 20.00th=[ 61], 00:27:59.745 | 30.00th=[ 69], 40.00th=[ 78], 50.00th=[ 85], 60.00th=[ 91], 00:27:59.745 | 70.00th=[ 95], 80.00th=[ 97], 90.00th=[ 107], 95.00th=[ 109], 00:27:59.745 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 150], 00:27:59.745 | 99.99th=[ 150] 00:27:59.745 bw ( KiB/s): min= 608, max= 952, per=4.23%, avg=776.05, stdev=81.57, samples=20 00:27:59.745 iops : min= 152, max= 238, avg=194.00, stdev=20.40, samples=20 00:27:59.745 lat (msec) : 50=4.29%, 100=79.05%, 250=16.66% 00:27:59.745 cpu : usr=32.24%, sys=2.24%, ctx=1025, majf=0, minf=1075 00:27:59.745 IO depths : 1=0.1%, 2=0.2%, 4=0.6%, 8=83.1%, 16=16.1%, 32=0.0%, >=64=0.0% 00:27:59.745 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.745 complete : 0=0.0%, 4=87.2%, 8=12.7%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.745 issued rwts: total=1957,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.745 filename2: (groupid=0, jobs=1): err= 0: pid=89606: Sun Nov 17 01:48:07 2024 00:27:59.745 read: IOPS=184, BW=738KiB/s (756kB/s)(7424KiB/10061msec) 00:27:59.745 slat (usec): min=5, max=8038, avg=25.01, stdev=247.34 00:27:59.745 clat (msec): min=23, max=177, avg=86.49, stdev=21.60 00:27:59.745 lat (msec): min=23, max=177, avg=86.52, stdev=21.60 00:27:59.745 clat percentiles (msec): 00:27:59.745 | 1.00th=[ 27], 5.00th=[ 53], 10.00th=[ 61], 20.00th=[ 67], 00:27:59.745 | 30.00th=[ 75], 40.00th=[ 85], 50.00th=[ 91], 60.00th=[ 94], 00:27:59.745 | 70.00th=[ 97], 80.00th=[ 105], 90.00th=[ 109], 95.00th=[ 120], 00:27:59.745 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 150], 99.95th=[ 178], 00:27:59.745 | 99.99th=[ 178] 00:27:59.745 bw ( KiB/s): min= 592, max= 1024, per=4.01%, avg=735.55, stdev=95.41, samples=20 00:27:59.745 iops : min= 148, max= 256, avg=183.85, stdev=23.86, samples=20 00:27:59.745 lat (msec) : 50=4.31%, 100=70.58%, 250=25.11% 00:27:59.745 cpu : usr=32.13%, sys=2.03%, ctx=1046, majf=0, minf=1074 00:27:59.745 IO depths : 1=0.1%, 2=1.6%, 4=6.1%, 8=76.7%, 16=15.5%, 32=0.0%, >=64=0.0% 00:27:59.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.745 complete : 0=0.0%, 4=88.9%, 8=9.7%, 16=1.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.745 issued rwts: total=1856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.745 filename2: (groupid=0, jobs=1): err= 0: pid=89607: Sun Nov 17 01:48:07 2024 00:27:59.745 read: IOPS=188, BW=754KiB/s (773kB/s)(7572KiB/10036msec) 00:27:59.745 slat (usec): min=5, max=8035, avg=34.03, stdev=319.02 00:27:59.745 clat (msec): min=20, max=147, avg=84.60, stdev=20.83 00:27:59.745 lat (msec): min=20, max=147, avg=84.63, stdev=20.83 00:27:59.745 clat percentiles (msec): 00:27:59.745 | 1.00th=[ 26], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 65], 00:27:59.745 | 30.00th=[ 73], 40.00th=[ 84], 50.00th=[ 87], 60.00th=[ 93], 00:27:59.745 | 70.00th=[ 96], 80.00th=[ 99], 90.00th=[ 108], 95.00th=[ 117], 00:27:59.745 | 99.00th=[ 142], 99.50th=[ 144], 99.90th=[ 148], 99.95th=[ 148], 00:27:59.745 | 99.99th=[ 148] 00:27:59.745 bw ( KiB/s): min= 552, max= 1017, per=4.09%, avg=751.55, stdev=89.94, samples=20 00:27:59.745 iops : min= 138, max= 254, avg=187.85, stdev=22.46, samples=20 00:27:59.745 lat (msec) : 50=4.33%, 100=77.65%, 250=18.01% 00:27:59.745 cpu : usr=36.98%, sys=2.42%, ctx=1128, majf=0, minf=1074 00:27:59.745 IO depths : 1=0.1%, 2=0.8%, 4=3.3%, 8=79.8%, 16=16.0%, 32=0.0%, >=64=0.0% 00:27:59.745 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.745 complete : 0=0.0%, 4=88.3%, 8=11.0%, 16=0.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:59.745 issued rwts: total=1893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:59.745 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:59.745 00:27:59.745 Run status group 0 (all jobs): 00:27:59.745 READ: bw=17.9MiB/s (18.8MB/s), 695KiB/s-841KiB/s (712kB/s-862kB/s), io=180MiB (189MB), run=10004-10068msec 00:27:59.745 ----------------------------------------------------- 00:27:59.745 Suppressions used: 00:27:59.745 count bytes template 00:27:59.745 45 402 /usr/src/fio/parse.c 00:27:59.745 1 8 libtcmalloc_minimal.so 
00:27:59.745 1 904 libcrypto.so 00:27:59.745 ----------------------------------------------------- 00:27:59.745 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:59.745 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.033 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 bdev_null0 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 [2024-11-17 01:48:08.258231] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 
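Sketch (not part of the captured output): rpc_cmd in the harness effectively forwards to SPDK's scripts/rpc.py, so the DIF type-1 setup traced above can be reproduced by hand roughly as follows, assuming the target's default /var/tmp/spdk.sock RPC socket:

# 64 MB null bdev, 512-byte blocks plus 16 bytes of per-block metadata, DIF type 1
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
# expose it over NVMe/TCP on the veth address the target listens on
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420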
00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 bdev_null1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:00.034 { 00:28:00.034 "params": { 00:28:00.034 "name": "Nvme$subsystem", 00:28:00.034 "trtype": "$TEST_TRANSPORT", 00:28:00.034 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.034 "adrfam": "ipv4", 00:28:00.034 "trsvcid": "$NVMF_PORT", 00:28:00.034 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.034 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.034 "hdgst": ${hdgst:-false}, 00:28:00.034 "ddgst": ${ddgst:-false} 00:28:00.034 }, 00:28:00.034 "method": "bdev_nvme_attach_controller" 00:28:00.034 } 00:28:00.034 EOF 00:28:00.034 )") 00:28:00.034 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:00.035 { 00:28:00.035 "params": { 00:28:00.035 "name": "Nvme$subsystem", 00:28:00.035 "trtype": "$TEST_TRANSPORT", 00:28:00.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.035 "adrfam": "ipv4", 00:28:00.035 "trsvcid": "$NVMF_PORT", 00:28:00.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.035 "hdgst": ${hdgst:-false}, 00:28:00.035 "ddgst": ${ddgst:-false} 00:28:00.035 }, 00:28:00.035 "method": "bdev_nvme_attach_controller" 00:28:00.035 } 00:28:00.035 EOF 00:28:00.035 )") 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:00.035 "params": { 00:28:00.035 "name": "Nvme0", 00:28:00.035 "trtype": "tcp", 00:28:00.035 "traddr": "10.0.0.3", 00:28:00.035 "adrfam": "ipv4", 00:28:00.035 "trsvcid": "4420", 00:28:00.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:00.035 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:00.035 "hdgst": false, 00:28:00.035 "ddgst": false 00:28:00.035 }, 00:28:00.035 "method": "bdev_nvme_attach_controller" 00:28:00.035 },{ 00:28:00.035 "params": { 00:28:00.035 "name": "Nvme1", 00:28:00.035 "trtype": "tcp", 00:28:00.035 "traddr": "10.0.0.3", 00:28:00.035 "adrfam": "ipv4", 00:28:00.035 "trsvcid": "4420", 00:28:00.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.035 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:00.035 "hdgst": false, 00:28:00.035 "ddgst": false 00:28:00.035 }, 00:28:00.035 "method": "bdev_nvme_attach_controller" 00:28:00.035 }' 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1351 -- # break 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:00.035 01:48:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:00.302 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:00.302 ... 00:28:00.302 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:28:00.302 ... 
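A rough standalone equivalent of the fio invocation above (a sketch, not the harness's exact call: the JSON written to /dev/fd/62 is the two bdev_nvme_attach_controller entries printed earlier, the job file on /dev/fd/61 is generated by gen_fio_conf and is not echoed in this log, and the bdev names Nvme0n1/Nvme1n1 are assumptions based on the controller names Nvme0/Nvme1; libasan is preloaded only because this is an ASAN build):

# save the printed attach-controller JSON as bdev.json, then describe the same
# workload fio reports in the banner above
cat > dif_rand.fio <<'EOF'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
bs=8k,16k,128k
iodepth=8
runtime=5
numjobs=2

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
EOF
LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json dif_rand.fio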
00:28:00.302 fio-3.35 00:28:00.302 Starting 4 threads 00:28:06.863 00:28:06.863 filename0: (groupid=0, jobs=1): err= 0: pid=89738: Sun Nov 17 01:48:14 2024 00:28:06.863 read: IOPS=1859, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5001msec) 00:28:06.863 slat (nsec): min=5178, max=70955, avg=15570.13, stdev=5127.81 00:28:06.863 clat (usec): min=811, max=14282, avg=4249.90, stdev=903.19 00:28:06.863 lat (usec): min=821, max=14308, avg=4265.47, stdev=903.39 00:28:06.863 clat percentiles (usec): 00:28:06.863 | 1.00th=[ 1631], 5.00th=[ 2474], 10.00th=[ 2704], 20.00th=[ 3654], 00:28:06.863 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:28:06.863 | 70.00th=[ 4752], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:28:06.863 | 99.00th=[ 5538], 99.50th=[ 5604], 99.90th=[ 6259], 99.95th=[10290], 00:28:06.863 | 99.99th=[14222] 00:28:06.863 bw ( KiB/s): min=12432, max=17008, per=26.28%, avg=15146.44, stdev=1527.71, samples=9 00:28:06.863 iops : min= 1554, max= 2126, avg=1893.22, stdev=190.86, samples=9 00:28:06.863 lat (usec) : 1000=0.02% 00:28:06.863 lat (msec) : 2=1.86%, 4=18.91%, 10=79.12%, 20=0.09% 00:28:06.863 cpu : usr=91.80%, sys=7.24%, ctx=35, majf=0, minf=1075 00:28:06.863 IO depths : 1=0.1%, 2=15.2%, 4=55.4%, 8=29.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.863 complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.863 issued rwts: total=9298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.863 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:06.863 filename0: (groupid=0, jobs=1): err= 0: pid=89739: Sun Nov 17 01:48:14 2024 00:28:06.863 read: IOPS=1761, BW=13.8MiB/s (14.4MB/s)(68.8MiB/5002msec) 00:28:06.863 slat (nsec): min=5248, max=63448, avg=16119.27, stdev=5087.10 00:28:06.863 clat (usec): min=1271, max=8813, avg=4482.84, stdev=775.32 00:28:06.863 lat (usec): min=1285, max=8835, avg=4498.95, stdev=775.45 00:28:06.863 clat percentiles (usec): 00:28:06.863 | 1.00th=[ 1729], 5.00th=[ 2540], 10.00th=[ 3916], 20.00th=[ 4293], 00:28:06.863 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4686], 00:28:06.863 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:28:06.863 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 6718], 99.95th=[ 8455], 00:28:06.863 | 99.99th=[ 8848] 00:28:06.863 bw ( KiB/s): min=12288, max=15840, per=23.88%, avg=13766.22, stdev=1273.39, samples=9 00:28:06.863 iops : min= 1536, max= 1980, avg=1720.78, stdev=159.17, samples=9 00:28:06.863 lat (msec) : 2=1.29%, 4=8.80%, 10=89.91% 00:28:06.863 cpu : usr=91.34%, sys=7.72%, ctx=117, majf=0, minf=1073 00:28:06.863 IO depths : 1=0.1%, 2=19.8%, 4=52.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.863 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.863 issued rwts: total=8810,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.863 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:06.863 filename1: (groupid=0, jobs=1): err= 0: pid=89740: Sun Nov 17 01:48:14 2024 00:28:06.863 read: IOPS=1762, BW=13.8MiB/s (14.4MB/s)(68.9MiB/5002msec) 00:28:06.863 slat (nsec): min=5271, max=67372, avg=16264.40, stdev=5097.45 00:28:06.863 clat (usec): min=1285, max=7853, avg=4479.31, stdev=775.11 00:28:06.863 lat (usec): min=1301, max=7874, avg=4495.58, stdev=774.96 00:28:06.863 clat percentiles (usec): 00:28:06.863 | 1.00th=[ 1729], 5.00th=[ 2540], 10.00th=[ 
3621], 20.00th=[ 4293], 00:28:06.863 | 30.00th=[ 4293], 40.00th=[ 4359], 50.00th=[ 4424], 60.00th=[ 4686], 00:28:06.863 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:28:06.863 | 99.00th=[ 5735], 99.50th=[ 6063], 99.90th=[ 6718], 99.95th=[ 7570], 00:28:06.863 | 99.99th=[ 7832] 00:28:06.863 bw ( KiB/s): min=12288, max=15776, per=23.89%, avg=13770.67, stdev=1251.12, samples=9 00:28:06.863 iops : min= 1536, max= 1972, avg=1721.33, stdev=156.39, samples=9 00:28:06.863 lat (msec) : 2=1.35%, 4=8.82%, 10=89.83% 00:28:06.863 cpu : usr=91.56%, sys=7.56%, ctx=37, majf=0, minf=1074 00:28:06.863 IO depths : 1=0.1%, 2=19.8%, 4=52.9%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.863 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.863 issued rwts: total=8817,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.863 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:06.863 filename1: (groupid=0, jobs=1): err= 0: pid=89741: Sun Nov 17 01:48:14 2024 00:28:06.863 read: IOPS=1824, BW=14.3MiB/s (14.9MB/s)(71.3MiB/5004msec) 00:28:06.863 slat (usec): min=3, max=184, avg=16.00, stdev= 5.92 00:28:06.863 clat (usec): min=1113, max=9680, avg=4326.58, stdev=840.85 00:28:06.863 lat (usec): min=1123, max=9702, avg=4342.58, stdev=840.51 00:28:06.863 clat percentiles (usec): 00:28:06.863 | 1.00th=[ 2409], 5.00th=[ 2507], 10.00th=[ 2769], 20.00th=[ 4228], 00:28:06.863 | 30.00th=[ 4293], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4555], 00:28:06.863 | 70.00th=[ 4817], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:28:06.863 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7242], 99.95th=[ 9503], 00:28:06.863 | 99.99th=[ 9634] 00:28:06.863 bw ( KiB/s): min=12288, max=17008, per=25.74%, avg=14839.11, stdev=1454.38, samples=9 00:28:06.863 iops : min= 1536, max= 2126, avg=1854.89, stdev=181.80, samples=9 00:28:06.863 lat (msec) : 2=0.12%, 4=17.63%, 10=82.25% 00:28:06.863 cpu : usr=90.47%, sys=8.30%, ctx=42, majf=0, minf=1075 00:28:06.863 IO depths : 1=0.1%, 2=16.8%, 4=54.5%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:06.863 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.863 complete : 0=0.0%, 4=93.5%, 8=6.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:06.863 issued rwts: total=9131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:06.863 latency : target=0, window=0, percentile=100.00%, depth=8 00:28:06.863 00:28:06.863 Run status group 0 (all jobs): 00:28:06.863 READ: bw=56.3MiB/s (59.0MB/s), 13.8MiB/s-14.5MiB/s (14.4MB/s-15.2MB/s), io=282MiB (295MB), run=5001-5004msec 00:28:07.123 ----------------------------------------------------- 00:28:07.123 Suppressions used: 00:28:07.123 count bytes template 00:28:07.123 6 52 /usr/src/fio/parse.c 00:28:07.123 1 8 libtcmalloc_minimal.so 00:28:07.123 1 904 libcrypto.so 00:28:07.123 ----------------------------------------------------- 00:28:07.123 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.123 ************************************ 00:28:07.123 END TEST fio_dif_rand_params 00:28:07.123 ************************************ 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.123 00:28:07.123 real 0m26.885s 00:28:07.123 user 2m6.298s 00:28:07.123 sys 0m9.101s 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:07.123 01:48:15 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:07.123 01:48:15 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:28:07.123 01:48:15 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:07.123 01:48:15 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:07.123 01:48:15 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:07.123 ************************************ 00:28:07.123 START TEST fio_dif_digest 00:28:07.123 ************************************ 00:28:07.123 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest 
-- target/dif.sh@127 -- # runtime=10 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.124 bdev_null0 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.3 -s 4420 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:07.124 [2024-11-17 01:48:15.487365] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 
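The digest variant set up immediately above differs from the earlier runs only in the protection-information format and in enabling NVMe/TCP digests on the initiator side; a one-line sketch of the bdev call with its arguments spelled out (socket path assumed default):

# 64 MB null bdev, 512-byte blocks + 16 bytes of metadata carrying DIF type 3 protection info
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

The hdgst=true/ddgst=true settings requested by the test become the "hdgst": true, "ddgst": true fields of the bdev_nvme_attach_controller config printed a few lines below, which turn on the TCP PDU header and data digests (CRC32C) for the connection.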
00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:07.124 { 00:28:07.124 "params": { 00:28:07.124 "name": "Nvme$subsystem", 00:28:07.124 "trtype": "$TEST_TRANSPORT", 00:28:07.124 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:07.124 "adrfam": "ipv4", 00:28:07.124 "trsvcid": "$NVMF_PORT", 00:28:07.124 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:07.124 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:07.124 "hdgst": ${hdgst:-false}, 00:28:07.124 "ddgst": ${ddgst:-false} 00:28:07.124 }, 00:28:07.124 "method": "bdev_nvme_attach_controller" 00:28:07.124 } 00:28:07.124 EOF 00:28:07.124 )") 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:07.124 "params": { 00:28:07.124 "name": "Nvme0", 00:28:07.124 "trtype": "tcp", 00:28:07.124 "traddr": "10.0.0.3", 00:28:07.124 "adrfam": "ipv4", 00:28:07.124 "trsvcid": "4420", 00:28:07.124 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:07.124 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:07.124 "hdgst": true, 00:28:07.124 "ddgst": true 00:28:07.124 }, 00:28:07.124 "method": "bdev_nvme_attach_controller" 00:28:07.124 }' 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1351 -- # break 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:28:07.124 01:48:15 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:07.383 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:07.383 ... 00:28:07.383 fio-3.35 00:28:07.383 Starting 3 threads 00:28:19.586 00:28:19.586 filename0: (groupid=0, jobs=1): err= 0: pid=89851: Sun Nov 17 01:48:26 2024 00:28:19.586 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10013msec) 00:28:19.586 slat (nsec): min=6093, max=92308, avg=17714.14, stdev=6035.99 00:28:19.586 clat (usec): min=13975, max=21426, avg=14576.67, stdev=587.49 00:28:19.586 lat (usec): min=14005, max=21462, avg=14594.39, stdev=587.85 00:28:19.586 clat percentiles (usec): 00:28:19.586 | 1.00th=[14091], 5.00th=[14091], 10.00th=[14222], 20.00th=[14222], 00:28:19.586 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14353], 60.00th=[14484], 00:28:19.586 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15270], 95.00th=[15795], 00:28:19.586 | 99.00th=[16581], 99.50th=[16712], 99.90th=[21365], 99.95th=[21365], 00:28:19.586 | 99.99th=[21365] 00:28:19.586 bw ( KiB/s): min=25344, max=26880, per=33.33%, avg=26265.60, stdev=534.41, samples=20 00:28:19.586 iops : min= 198, max= 210, avg=205.20, stdev= 4.18, samples=20 00:28:19.586 lat (msec) : 20=99.85%, 50=0.15% 00:28:19.586 cpu : usr=92.13%, sys=7.30%, ctx=50, majf=0, minf=1075 00:28:19.586 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.586 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.586 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.586 filename0: (groupid=0, jobs=1): err= 0: pid=89852: Sun Nov 17 01:48:26 2024 00:28:19.586 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10008msec) 00:28:19.586 slat (nsec): min=5491, max=57708, avg=18405.52, stdev=6207.68 00:28:19.586 clat (usec): min=13976, max=16969, avg=14567.01, stdev=529.12 00:28:19.586 lat (usec): min=13990, max=16997, avg=14585.41, stdev=529.54 00:28:19.586 clat percentiles (usec): 00:28:19.586 | 1.00th=[14091], 5.00th=[14091], 10.00th=[14222], 20.00th=[14222], 00:28:19.586 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14353], 60.00th=[14484], 00:28:19.586 | 
70.00th=[14615], 80.00th=[14877], 90.00th=[15270], 95.00th=[15795], 00:28:19.586 | 99.00th=[16450], 99.50th=[16712], 99.90th=[16909], 99.95th=[16909], 00:28:19.586 | 99.99th=[16909] 00:28:19.586 bw ( KiB/s): min=25344, max=26880, per=33.39%, avg=26316.84, stdev=500.77, samples=19 00:28:19.586 iops : min= 198, max= 210, avg=205.58, stdev= 3.92, samples=19 00:28:19.586 lat (msec) : 20=100.00% 00:28:19.586 cpu : usr=91.86%, sys=7.59%, ctx=17, majf=0, minf=1072 00:28:19.586 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.586 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.586 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.586 filename0: (groupid=0, jobs=1): err= 0: pid=89853: Sun Nov 17 01:48:26 2024 00:28:19.586 read: IOPS=205, BW=25.7MiB/s (26.9MB/s)(257MiB/10009msec) 00:28:19.586 slat (nsec): min=5376, max=87185, avg=17921.88, stdev=5860.22 00:28:19.586 clat (usec): min=13966, max=17835, avg=14570.65, stdev=538.25 00:28:19.586 lat (usec): min=13980, max=17862, avg=14588.57, stdev=538.50 00:28:19.586 clat percentiles (usec): 00:28:19.586 | 1.00th=[14091], 5.00th=[14091], 10.00th=[14222], 20.00th=[14222], 00:28:19.586 | 30.00th=[14222], 40.00th=[14353], 50.00th=[14353], 60.00th=[14484], 00:28:19.586 | 70.00th=[14615], 80.00th=[14877], 90.00th=[15270], 95.00th=[15795], 00:28:19.586 | 99.00th=[16450], 99.50th=[16712], 99.90th=[17695], 99.95th=[17957], 00:28:19.586 | 99.99th=[17957] 00:28:19.586 bw ( KiB/s): min=25344, max=26880, per=33.39%, avg=26314.11, stdev=501.79, samples=19 00:28:19.586 iops : min= 198, max= 210, avg=205.58, stdev= 3.92, samples=19 00:28:19.586 lat (msec) : 20=100.00% 00:28:19.586 cpu : usr=92.65%, sys=6.81%, ctx=23, majf=0, minf=1074 00:28:19.586 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:19.586 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.586 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:19.586 issued rwts: total=2055,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:19.586 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:19.586 00:28:19.586 Run status group 0 (all jobs): 00:28:19.586 READ: bw=77.0MiB/s (80.7MB/s), 25.7MiB/s-25.7MiB/s (26.9MB/s-26.9MB/s), io=771MiB (808MB), run=10008-10013msec 00:28:19.586 ----------------------------------------------------- 00:28:19.586 Suppressions used: 00:28:19.586 count bytes template 00:28:19.586 5 44 /usr/src/fio/parse.c 00:28:19.586 1 8 libtcmalloc_minimal.so 00:28:19.586 1 904 libcrypto.so 00:28:19.586 ----------------------------------------------------- 00:28:19.586 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:19.586 01:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:19.586 00:28:19.586 real 0m12.253s 00:28:19.587 user 0m29.528s 00:28:19.587 sys 0m2.520s 00:28:19.587 01:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:19.587 01:48:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:28:19.587 ************************************ 00:28:19.587 END TEST fio_dif_digest 00:28:19.587 ************************************ 00:28:19.587 01:48:27 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:19.587 01:48:27 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.587 rmmod nvme_tcp 00:28:19.587 rmmod nvme_fabrics 00:28:19.587 rmmod nvme_keyring 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 89098 ']' 00:28:19.587 01:48:27 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 89098 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 89098 ']' 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 89098 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89098 00:28:19.587 killing process with pid 89098 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89098' 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@973 -- # kill 89098 00:28:19.587 01:48:27 nvmf_dif -- common/autotest_common.sh@978 -- # wait 89098 00:28:20.522 01:48:28 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:20.522 01:48:28 nvmf_dif -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:20.522 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:20.781 Waiting for block devices as requested 00:28:20.781 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:20.781 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@524 -- 
# nvmf_tcp_fini 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:20.781 01:48:29 nvmf_dif -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.039 01:48:29 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:21.039 01:48:29 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.039 01:48:29 nvmf_dif -- nvmf/common.sh@300 -- # return 0 00:28:21.039 ************************************ 00:28:21.039 END TEST nvmf_dif 00:28:21.039 ************************************ 00:28:21.039 00:28:21.039 real 1m7.837s 00:28:21.039 user 4m3.464s 00:28:21.039 sys 0m19.732s 00:28:21.039 01:48:29 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.039 01:48:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:21.299 01:48:29 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:21.299 01:48:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:21.299 01:48:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.299 01:48:29 -- common/autotest_common.sh@10 -- # set +x 00:28:21.299 ************************************ 00:28:21.299 START TEST nvmf_abort_qd_sizes 00:28:21.299 ************************************ 00:28:21.299 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:21.299 * Looking for test storage... 
00:28:21.300 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:21.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.300 --rc genhtml_branch_coverage=1 00:28:21.300 --rc genhtml_function_coverage=1 00:28:21.300 --rc genhtml_legend=1 00:28:21.300 --rc geninfo_all_blocks=1 00:28:21.300 --rc geninfo_unexecuted_blocks=1 00:28:21.300 00:28:21.300 ' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:21.300 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.300 --rc genhtml_branch_coverage=1 00:28:21.300 --rc genhtml_function_coverage=1 00:28:21.300 --rc genhtml_legend=1 00:28:21.300 --rc geninfo_all_blocks=1 00:28:21.300 --rc geninfo_unexecuted_blocks=1 00:28:21.300 00:28:21.300 ' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:21.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.300 --rc genhtml_branch_coverage=1 00:28:21.300 --rc genhtml_function_coverage=1 00:28:21.300 --rc genhtml_legend=1 00:28:21.300 --rc geninfo_all_blocks=1 00:28:21.300 --rc geninfo_unexecuted_blocks=1 00:28:21.300 00:28:21.300 ' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:21.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.300 --rc genhtml_branch_coverage=1 00:28:21.300 --rc genhtml_function_coverage=1 00:28:21.300 --rc genhtml_legend=1 00:28:21.300 --rc geninfo_all_blocks=1 00:28:21.300 --rc geninfo_unexecuted_blocks=1 00:28:21.300 00:28:21.300 ' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:21.300 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ virt != virt ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ no == yes ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # [[ virt == phy ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # [[ virt == phy-fallback ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@459 -- # [[ tcp == tcp ]] 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@460 -- # nvmf_veth_init 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@145 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # NVMF_SECOND_INITIATOR_IP=10.0.0.2 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@147 -- # NVMF_FIRST_TARGET_IP=10.0.0.3 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # NVMF_SECOND_TARGET_IP=10.0.0.4 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@149 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.300 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # NVMF_BRIDGE=nvmf_br 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@151 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # NVMF_INITIATOR_INTERFACE2=nvmf_init_if2 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@153 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # NVMF_INITIATOR_BRIDGE2=nvmf_init_br2 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@155 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@158 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # ip link set nvmf_init_br nomaster 00:28:21.301 Cannot find device "nvmf_init_br" 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # true 00:28:21.301 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # ip link set nvmf_init_br2 nomaster 00:28:21.560 Cannot find device "nvmf_init_br2" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # ip link set nvmf_tgt_br nomaster 00:28:21.560 Cannot find device "nvmf_tgt_br" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@164 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # ip link set nvmf_tgt_br2 nomaster 00:28:21.560 Cannot find device "nvmf_tgt_br2" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@165 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # ip link set nvmf_init_br down 00:28:21.560 Cannot find device "nvmf_init_br" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@166 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # ip link set 
nvmf_init_br2 down 00:28:21.560 Cannot find device "nvmf_init_br2" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@167 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # ip link set nvmf_tgt_br down 00:28:21.560 Cannot find device "nvmf_tgt_br" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@168 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # ip link set nvmf_tgt_br2 down 00:28:21.560 Cannot find device "nvmf_tgt_br2" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # ip link delete nvmf_br type bridge 00:28:21.560 Cannot find device "nvmf_br" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@170 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # ip link delete nvmf_init_if 00:28:21.560 Cannot find device "nvmf_init_if" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # ip link delete nvmf_init_if2 00:28:21.560 Cannot find device "nvmf_init_if2" 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:21.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@173 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:21.560 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@174 -- # true 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # ip netns add nvmf_tgt_ns_spdk 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@180 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@181 -- # ip link add nvmf_init_if2 type veth peer name nvmf_init_br2 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@187 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@190 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@191 -- # ip addr add 10.0.0.2/24 dev nvmf_init_if2 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.4/24 dev nvmf_tgt_if2 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@196 -- # ip link set nvmf_init_if up 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@197 -- # ip link set nvmf_init_if2 up 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@198 -- # ip link set nvmf_init_br up 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@199 -- # ip link set nvmf_init_br2 up 
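Recap of the address plan just assigned (comments only; the bridge that ties the veth peer ends together and the port-4420 iptables ACCEPT rules follow in the next few commands):

# host side:               nvmf_init_if  10.0.0.1/24   nvmf_init_if2  10.0.0.2/24
# netns nvmf_tgt_ns_spdk:  nvmf_tgt_if   10.0.0.3/24   nvmf_tgt_if2   10.0.0.4/24
# each interface is one end of a veth pair whose *_br peer gets enslaved to nvmf_br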
00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@200 -- # ip link set nvmf_tgt_br up 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@201 -- # ip link set nvmf_tgt_br2 up 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@202 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@203 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:21.560 01:48:29 nvmf_abort_qd_sizes -- nvmf/common.sh@204 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:21.560 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@207 -- # ip link add nvmf_br type bridge 00:28:21.560 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # ip link set nvmf_br up 00:28:21.560 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@211 -- # ip link set nvmf_init_br master nvmf_br 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@212 -- # ip link set nvmf_init_br2 master nvmf_br 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@213 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@217 -- # ipts -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT' 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@218 -- # ipts -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT' 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@219 -- # ipts -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT' 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@222 -- # ping -c 1 10.0.0.3 00:28:21.820 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:21.820 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:28:21.820 00:28:21.820 --- 10.0.0.3 ping statistics --- 00:28:21.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.820 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@223 -- # ping -c 1 10.0.0.4 00:28:21.820 PING 10.0.0.4 (10.0.0.4) 56(84) bytes of data. 00:28:21.820 64 bytes from 10.0.0.4: icmp_seq=1 ttl=64 time=0.039 ms 00:28:21.820 00:28:21.820 --- 10.0.0.4 ping statistics --- 00:28:21.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.820 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@224 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:21.820 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.820 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:28:21.820 00:28:21.820 --- 10.0.0.1 ping statistics --- 00:28:21.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.820 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@225 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2 00:28:21.820 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.820 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:28:21.820 00:28:21.820 --- 10.0.0.2 ping statistics --- 00:28:21.820 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.820 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@461 -- # return 0 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:28:21.820 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:22.388 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:22.647 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:22.647 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:22.647 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.647 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:22.647 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:22.647 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.647 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:22.647 01:48:30 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=90522 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 90522 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 90522 ']' 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.647 01:48:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:22.906 [2024-11-17 01:48:31.150688] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
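Continuing the recap, the commands traced above complete the topology: the four *_br veth peers are enslaved to a single bridge, TCP port 4420 is opened through the ipts/iptables wrapper (the SPDK_NVMF comment is what lets the cleanup path strip these rules again with iptables-save | grep -v SPDK_NVMF | iptables-restore near the end of the run), and connectivity is verified with one ping in each direction. Again a condensed sketch of what the log already shows, not new test output.

# One bridge ties the initiator-side and target-side veth peers together.
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br  master nvmf_br
ip link set nvmf_init_br2 master nvmf_br
ip link set nvmf_tgt_br   master nvmf_br
ip link set nvmf_tgt_br2  master nvmf_br

# Allow NVMe/TCP traffic to the initiator interfaces and across the bridge;
# each rule carries an SPDK_NVMF comment so it can be removed later.
iptables -I INPUT 1 -i nvmf_init_if  -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT'
iptables -I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i nvmf_init_if2 -p tcp --dport 4420 -j ACCEPT'
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT'

# Sanity check: initiator reaches 10.0.0.3/.4, the namespace reaches 10.0.0.1/.2.
ping -c 1 10.0.0.3
ping -c 1 10.0.0.4
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.2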
00:28:22.906 [2024-11-17 01:48:31.150873] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.906 [2024-11-17 01:48:31.341266] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:23.164 [2024-11-17 01:48:31.472116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:23.164 [2024-11-17 01:48:31.472426] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:23.164 [2024-11-17 01:48:31.472619] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:23.164 [2024-11-17 01:48:31.472953] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:23.164 [2024-11-17 01:48:31.473196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:23.164 [2024-11-17 01:48:31.475421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:23.164 [2024-11-17 01:48:31.475555] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:23.164 [2024-11-17 01:48:31.475710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:23.164 [2024-11-17 01:48:31.475748] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:23.422 [2024-11-17 01:48:31.686052] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:23.679 01:48:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:23.679 01:48:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:28:23.679 01:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:23.679 01:48:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:23.680 01:48:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n '' ]] 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@298 -- # local bdf= 00:28:23.938 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@233 -- # local class 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@234 -- # local subclass 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@235 -- # local progif 00:28:23.939 01:48:32 
nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # printf %02x 1 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@236 -- # class=01 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # printf %02x 8 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@237 -- # subclass=08 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # printf %02x 2 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@238 -- # progif=02 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@240 -- # hash lspci 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@243 -- # grep -i -- -p02 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@242 -- # lspci -mm -n -D 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@245 -- # tr -d '"' 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@18 -- # local i 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@25 -- # [[ -z '' ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@27 -- # return 0 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 2 )) 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 
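The nvme_in_userspace helper traced above discovers NVMe controllers by PCI class code rather than by name. Condensed, the probe is roughly the pipeline below (class 01 = mass storage, subclass 08 = non-volatile memory, prog-if 02 = NVM Express); on Linux, BDFs still bound to the kernel nvme driver are then filtered out via the /sys/bus/pci/drivers/nvme checks, which is why both controllers here, already rebound to uio_pci_generic by setup.sh, are kept. The exact control flow inside the helper may differ slightly from this recap.

# Build the class-code filter: class 01, subclass 08, prog-if 02 -> "0108" / -p02.
class=$(printf '%02x' 1)     # 01
subclass=$(printf '%02x' 8)  # 08
progif=$(printf '%02x' 2)    # 02

# List all PCI functions, keep NVMe ones, print their BDFs
# (e.g. 0000:00:10.0 and 0000:00:11.0 in this run).
lspci -mm -n -D |
    grep -i -- "-p${progif}" |
    awk -v cc="\"${class}${subclass}\"" -F ' ' '{if (cc ~ $2) print $1}' |
    tr -d '"'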
00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:23.939 01:48:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:23.939 ************************************ 00:28:23.939 START TEST spdk_target_abort 00:28:23.939 ************************************ 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.939 spdk_targetn1 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.939 [2024-11-17 01:48:32.285741] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.3 -s 4420 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:23.939 [2024-11-17 01:48:32.332704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:28:23.939 01:48:32 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.3 4420 nqn.2016-06.io.spdk:testnqn 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.3 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3' 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420' 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:23.939 01:48:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:27.225 Initializing NVMe Controllers 00:28:27.225 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:27.225 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:27.225 Initialization complete. Launching workers. 
00:28:27.225 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8881, failed: 0 00:28:27.225 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1034, failed to submit 7847 00:28:27.225 success 855, unsuccessful 179, failed 0 00:28:27.225 01:48:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:27.225 01:48:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:31.409 Initializing NVMe Controllers 00:28:31.409 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:31.409 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:31.409 Initialization complete. Launching workers. 00:28:31.409 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8904, failed: 0 00:28:31.409 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1179, failed to submit 7725 00:28:31.409 success 399, unsuccessful 780, failed 0 00:28:31.409 01:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:31.409 01:48:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.3 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:34.693 Initializing NVMe Controllers 00:28:34.693 Attached to NVMe over Fabrics controller at 10.0.0.3:4420: nqn.2016-06.io.spdk:testnqn 00:28:34.693 Associating TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:34.693 Initialization complete. Launching workers. 
00:28:34.693 NS: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27694, failed: 0 00:28:34.693 CTRLR: TCP (addr:10.0.0.3 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2193, failed to submit 25501 00:28:34.693 success 389, unsuccessful 1804, failed 0 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 90522 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 90522 ']' 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 90522 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90522 00:28:34.693 killing process with pid 90522 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90522' 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 90522 00:28:34.693 01:48:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 90522 00:28:35.260 00:28:35.260 real 0m11.344s 00:28:35.260 user 0m45.334s 00:28:35.260 sys 0m2.226s 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:35.260 ************************************ 00:28:35.260 END TEST spdk_target_abort 00:28:35.260 ************************************ 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:35.260 01:48:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:35.260 01:48:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:35.260 01:48:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:35.260 01:48:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:35.260 ************************************ 00:28:35.260 START TEST kernel_target_abort 00:28:35.260 
************************************ 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:35.260 01:48:43 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:35.519 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:35.519 Waiting for block devices as requested 00:28:35.778 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:35.778 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:28:36.038 No valid GPT data, bailing 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n2 ]] 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n2 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n2 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n2 pt 00:28:36.038 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:28:36.297 No valid GPT data, bailing 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n2 
00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n2 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n3 ]] 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n3 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n3 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n3 pt 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:28:36.297 No valid GPT data, bailing 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n3 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme1n1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme1n1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme1n1 pt 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:28:36.297 No valid GPT data, bailing 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme1n1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ 
-b /dev/nvme1n1 ]] 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme1n1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:36.297 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 --hostid=5af99618-86f8-46bf-8130-da23f42c5a81 -a 10.0.0.1 -t tcp -s 4420 00:28:36.297 00:28:36.297 Discovery Log Number of Records 2, Generation counter 2 00:28:36.297 =====Discovery Log Entry 0====== 00:28:36.297 trtype: tcp 00:28:36.297 adrfam: ipv4 00:28:36.297 subtype: current discovery subsystem 00:28:36.297 treq: not specified, sq flow control disable supported 00:28:36.297 portid: 1 00:28:36.297 trsvcid: 4420 00:28:36.297 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:36.297 traddr: 10.0.0.1 00:28:36.297 eflags: none 00:28:36.297 sectype: none 00:28:36.297 =====Discovery Log Entry 1====== 00:28:36.297 trtype: tcp 00:28:36.297 adrfam: ipv4 00:28:36.297 subtype: nvme subsystem 00:28:36.297 treq: not specified, sq flow control disable supported 00:28:36.297 portid: 1 00:28:36.297 trsvcid: 4420 00:28:36.298 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:36.298 traddr: 10.0.0.1 00:28:36.298 eflags: none 00:28:36.298 sectype: none 00:28:36.556 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:36.556 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:36.556 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:36.556 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:36.556 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:36.556 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:36.556 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:36.556 01:48:44 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:36.557 01:48:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:39.845 Initializing NVMe Controllers 00:28:39.845 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:39.845 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:39.845 Initialization complete. Launching workers. 00:28:39.845 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 24435, failed: 0 00:28:39.845 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24435, failed to submit 0 00:28:39.845 success 0, unsuccessful 24435, failed 0 00:28:39.845 01:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:39.845 01:48:48 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:43.130 Initializing NVMe Controllers 00:28:43.130 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:43.130 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:43.130 Initialization complete. Launching workers. 
00:28:43.130 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 55148, failed: 0 00:28:43.130 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 22679, failed to submit 32469 00:28:43.130 success 0, unsuccessful 22679, failed 0 00:28:43.130 01:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:43.130 01:48:51 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:46.557 Initializing NVMe Controllers 00:28:46.557 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:46.557 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:46.557 Initialization complete. Launching workers. 00:28:46.557 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 59749, failed: 0 00:28:46.557 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14886, failed to submit 44863 00:28:46.557 success 0, unsuccessful 14886, failed 0 00:28:46.557 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:46.557 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:46.557 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:28:46.557 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:46.557 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:46.557 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:46.557 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:46.558 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:46.558 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:46.558 01:48:54 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:47.125 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:47.693 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:47.693 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:47.693 00:28:47.693 real 0m12.404s 00:28:47.693 user 0m6.411s 00:28:47.693 sys 0m3.657s 00:28:47.693 01:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:47.693 01:48:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:28:47.693 ************************************ 00:28:47.693 END TEST kernel_target_abort 00:28:47.693 ************************************ 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:47.693 
01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:47.693 rmmod nvme_tcp 00:28:47.693 rmmod nvme_fabrics 00:28:47.693 rmmod nvme_keyring 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:47.693 Process with pid 90522 is not found 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 90522 ']' 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 90522 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 90522 ']' 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 90522 00:28:47.693 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90522) - No such process 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 90522 is not found' 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:28:47.693 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:28:48.261 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:48.261 Waiting for block devices as requested 00:28:48.261 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:28:48.261 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # nvmf_veth_fini 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # ip link set nvmf_init_br nomaster 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # ip link set nvmf_init_br2 nomaster 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@235 -- # ip link set nvmf_tgt_br nomaster 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # ip link set nvmf_tgt_br2 nomaster 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # ip link set nvmf_init_br down 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # ip link set nvmf_init_br2 down 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@239 -- # ip link set nvmf_tgt_br down 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # ip link set nvmf_tgt_br2 down 00:28:48.520 01:48:56 nvmf_abort_qd_sizes 
-- nvmf/common.sh@241 -- # ip link delete nvmf_br type bridge 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # ip link delete nvmf_init_if 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # ip link delete nvmf_init_if2 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@246 -- # remove_spdk_ns 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:48.520 01:48:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:48.778 01:48:56 nvmf_abort_qd_sizes -- nvmf/common.sh@300 -- # return 0 00:28:48.778 00:28:48.778 real 0m27.485s 00:28:48.778 user 0m53.133s 00:28:48.778 sys 0m7.379s 00:28:48.778 01:48:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:48.778 ************************************ 00:28:48.779 END TEST nvmf_abort_qd_sizes 00:28:48.779 ************************************ 00:28:48.779 01:48:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:28:48.779 01:48:57 -- spdk/autotest.sh@292 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:48.779 01:48:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:48.779 01:48:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:48.779 01:48:57 -- common/autotest_common.sh@10 -- # set +x 00:28:48.779 ************************************ 00:28:48.779 START TEST keyring_file 00:28:48.779 ************************************ 00:28:48.779 01:48:57 keyring_file -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:28:48.779 * Looking for test storage... 
00:28:48.779 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:28:48.779 01:48:57 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:48.779 01:48:57 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:28:48.779 01:48:57 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:48.779 01:48:57 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@345 -- # : 1 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@353 -- # local d=1 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:48.779 01:48:57 keyring_file -- scripts/common.sh@355 -- # echo 1 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@353 -- # local d=2 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@355 -- # echo 2 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@368 -- # return 0 00:28:49.038 01:48:57 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.038 01:48:57 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.038 --rc genhtml_branch_coverage=1 00:28:49.038 --rc genhtml_function_coverage=1 00:28:49.038 --rc genhtml_legend=1 00:28:49.038 --rc geninfo_all_blocks=1 00:28:49.038 --rc geninfo_unexecuted_blocks=1 00:28:49.038 00:28:49.038 ' 00:28:49.038 01:48:57 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.038 --rc genhtml_branch_coverage=1 00:28:49.038 --rc genhtml_function_coverage=1 00:28:49.038 --rc genhtml_legend=1 00:28:49.038 --rc geninfo_all_blocks=1 00:28:49.038 --rc 
geninfo_unexecuted_blocks=1 00:28:49.038 00:28:49.038 ' 00:28:49.038 01:48:57 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.038 --rc genhtml_branch_coverage=1 00:28:49.038 --rc genhtml_function_coverage=1 00:28:49.038 --rc genhtml_legend=1 00:28:49.038 --rc geninfo_all_blocks=1 00:28:49.038 --rc geninfo_unexecuted_blocks=1 00:28:49.038 00:28:49.038 ' 00:28:49.038 01:48:57 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:49.038 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.038 --rc genhtml_branch_coverage=1 00:28:49.038 --rc genhtml_function_coverage=1 00:28:49.038 --rc genhtml_legend=1 00:28:49.038 --rc geninfo_all_blocks=1 00:28:49.038 --rc geninfo_unexecuted_blocks=1 00:28:49.038 00:28:49.038 ' 00:28:49.038 01:48:57 keyring_file -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:28:49.038 01:48:57 keyring_file -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.038 01:48:57 keyring_file -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.038 01:48:57 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.039 01:48:57 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.039 01:48:57 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.039 01:48:57 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.039 01:48:57 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.039 01:48:57 keyring_file -- paths/export.sh@5 -- # export PATH 00:28:49.039 01:48:57 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@51 -- # : 0 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:49.039 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:49.039 01:48:57 
keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.n5Nr6Jcn9s 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.n5Nr6Jcn9s 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.n5Nr6Jcn9s 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.n5Nr6Jcn9s 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@17 -- # name=key1 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.wsmFklMsRV 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:49.039 01:48:57 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.wsmFklMsRV 00:28:49.039 01:48:57 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.wsmFklMsRV 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.wsmFklMsRV 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@30 -- # tgtpid=91539 00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:28:49.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
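The trace above is the prep_key helper materializing the two test PSKs: each call picks a mktemp path, pipes the hex key through a small inline python helper that emits the NVMe-oF TLS interchange form ("NVMeTLSkey-1:<digest>:<body>:"), and chmods the resulting file to 0600 so it can later be loaded with keyring_file_add_key. A standalone sketch of that flow for key0 is below; the exact body encoding (base64 of the ASCII key bytes plus a little-endian CRC-32) is an assumption, since the log only shows the helper being invoked, not its source, and only the mktemp/chmod/path plumbing is taken directly from the trace.

  # Hypothetical re-creation of the prep_key step for key0.
  key=00112233445566778899aabbccddeeff   # key0 value from the test
  digest=0
  path=$(mktemp)
  # Assumed interchange layout: base64(PSK bytes + little-endian CRC-32).
  python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+crc).decode()))' "$key" "$digest" > "$path"
  chmod 0600 "$path"   # keyring_file insists on 0600; the 0660 case is exercised later in this run
  key0path=$path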
00:28:49.039 01:48:57 keyring_file -- keyring/file.sh@32 -- # waitforlisten 91539 00:28:49.039 01:48:57 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91539 ']' 00:28:49.039 01:48:57 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.039 01:48:57 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:49.039 01:48:57 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.039 01:48:57 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:49.039 01:48:57 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:49.298 [2024-11-17 01:48:57.525463] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:28:49.298 [2024-11-17 01:48:57.525851] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91539 ] 00:28:49.298 [2024-11-17 01:48:57.710036] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.556 [2024-11-17 01:48:57.818923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:49.556 [2024-11-17 01:48:57.998678] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:50.124 01:48:58 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:50.124 [2024-11-17 01:48:58.473060] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:50.124 null0 00:28:50.124 [2024-11-17 01:48:58.505097] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:50.124 [2024-11-17 01:48:58.505318] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:50.124 01:48:58 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:50.124 01:48:58 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:50.125 [2024-11-17 01:48:58.533140] nvmf_rpc.c: 
762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:28:50.125 request: 00:28:50.125 { 00:28:50.125 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.125 "secure_channel": false, 00:28:50.125 "listen_address": { 00:28:50.125 "trtype": "tcp", 00:28:50.125 "traddr": "127.0.0.1", 00:28:50.125 "trsvcid": "4420" 00:28:50.125 }, 00:28:50.125 "method": "nvmf_subsystem_add_listener", 00:28:50.125 "req_id": 1 00:28:50.125 } 00:28:50.125 Got JSON-RPC error response 00:28:50.125 response: 00:28:50.125 { 00:28:50.125 "code": -32602, 00:28:50.125 "message": "Invalid parameters" 00:28:50.125 } 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:50.125 01:48:58 keyring_file -- keyring/file.sh@47 -- # bperfpid=91552 00:28:50.125 01:48:58 keyring_file -- keyring/file.sh@49 -- # waitforlisten 91552 /var/tmp/bperf.sock 00:28:50.125 01:48:58 keyring_file -- keyring/file.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91552 ']' 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:50.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.125 01:48:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:28:50.384 [2024-11-17 01:48:58.657023] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
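At this point spdk_tgt (pid 91539) is up, a TCP transport, the subsystem nqn.2016-06.io.spdk:cnode0 with a null bdev (null0), and a listener on 127.0.0.1:4420 have all been created. The NOT-wrapped rpc_cmd whose output follows is a negative check: adding the same listener a second time must fail, and the target duly logs "Listener already exists" and returns JSON-RPC error -32602. A minimal sketch of that expect-failure probe, assuming a target reachable on the default /var/tmp/spdk.sock and the repo path used in this run:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  # Duplicate listener registration is supposed to be rejected.
  if $rpc nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 \
        nqn.2016-06.io.spdk:cnode0; then
      echo "unexpected success: duplicate listener was accepted" >&2
      exit 1
  fi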
00:28:50.384 [2024-11-17 01:48:58.657432] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91552 ] 00:28:50.384 [2024-11-17 01:48:58.833312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:50.643 [2024-11-17 01:48:58.926848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.643 [2024-11-17 01:48:59.078539] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:28:51.211 01:48:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.211 01:48:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:28:51.211 01:48:59 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n5Nr6Jcn9s 00:28:51.211 01:48:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n5Nr6Jcn9s 00:28:51.470 01:48:59 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wsmFklMsRV 00:28:51.470 01:48:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wsmFklMsRV 00:28:51.728 01:48:59 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:28:51.728 01:48:59 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:28:51.728 01:48:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.728 01:48:59 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.728 01:48:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:51.987 01:49:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.n5Nr6Jcn9s == \/\t\m\p\/\t\m\p\.\n\5\N\r\6\J\c\n\9\s ]] 00:28:51.987 01:49:00 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:28:51.987 01:49:00 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:28:51.987 01:49:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:51.987 01:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:51.987 01:49:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:52.246 01:49:00 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.wsmFklMsRV == \/\t\m\p\/\t\m\p\.\w\s\m\F\k\l\M\s\R\V ]] 00:28:52.246 01:49:00 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:28:52.247 01:49:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:52.247 01:49:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:52.247 01:49:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.247 01:49:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:52.247 01:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.505 01:49:00 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:28:52.505 01:49:00 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:28:52.505 01:49:00 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:52.505 01:49:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:52.505 01:49:00 keyring_file -- 
keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:52.505 01:49:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:52.505 01:49:00 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:52.764 01:49:01 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:28:52.764 01:49:01 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:52.764 01:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:53.023 [2024-11-17 01:49:01.305857] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:53.023 nvme0n1 00:28:53.023 01:49:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:28:53.023 01:49:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:53.023 01:49:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.023 01:49:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.023 01:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.023 01:49:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:53.283 01:49:01 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:28:53.283 01:49:01 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:28:53.283 01:49:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:53.283 01:49:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:53.283 01:49:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:53.283 01:49:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:53.283 01:49:01 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:53.542 01:49:01 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:28:53.542 01:49:01 keyring_file -- keyring/file.sh@63 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:53.801 Running I/O for 1 seconds... 
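The block above is the positive path: both key files are registered with the bdevperf instance over /var/tmp/bperf.sock, their paths and refcounts are verified with keyring_get_keys piped through jq, and a controller is attached with --psk key0, which bumps key0's refcount from 1 to 2 before bdevperf runs one second of random I/O. Condensed into plain rpc.py calls (the socket path, repo path, and $key0path are carried over from the earlier steps), the flow looks roughly like:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  $rpc keyring_file_add_key key0 "$key0path"
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
       -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
  # The attached controller holds its own reference to the key, hence refcnt == 2.
  $rpc keyring_get_keys | jq -r '.[] | select(.name == "key0").refcnt'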
00:28:54.738 9661.00 IOPS, 37.74 MiB/s 00:28:54.738 Latency(us) 00:28:54.738 [2024-11-17T01:49:03.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:54.738 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:28:54.738 nvme0n1 : 1.01 9707.19 37.92 0.00 0.00 13143.04 6017.40 21448.15 00:28:54.738 [2024-11-17T01:49:03.197Z] =================================================================================================================== 00:28:54.738 [2024-11-17T01:49:03.197Z] Total : 9707.19 37.92 0.00 0.00 13143.04 6017.40 21448.15 00:28:54.738 { 00:28:54.738 "results": [ 00:28:54.738 { 00:28:54.738 "job": "nvme0n1", 00:28:54.738 "core_mask": "0x2", 00:28:54.738 "workload": "randrw", 00:28:54.738 "percentage": 50, 00:28:54.738 "status": "finished", 00:28:54.738 "queue_depth": 128, 00:28:54.738 "io_size": 4096, 00:28:54.738 "runtime": 1.008531, 00:28:54.738 "iops": 9707.18797934818, 00:28:54.738 "mibps": 37.91870304432883, 00:28:54.738 "io_failed": 0, 00:28:54.738 "io_timeout": 0, 00:28:54.738 "avg_latency_us": 13143.036671928685, 00:28:54.738 "min_latency_us": 6017.396363636363, 00:28:54.738 "max_latency_us": 21448.145454545454 00:28:54.738 } 00:28:54.738 ], 00:28:54.738 "core_count": 1 00:28:54.738 } 00:28:54.738 01:49:03 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:54.738 01:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:54.997 01:49:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:28:54.997 01:49:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:54.997 01:49:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:54.997 01:49:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:54.997 01:49:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:54.997 01:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:55.255 01:49:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:28:55.255 01:49:03 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:28:55.255 01:49:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:55.255 01:49:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:55.255 01:49:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:55.255 01:49:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:55.255 01:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:55.515 01:49:03 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:28:55.515 01:49:03 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.515 01:49:03 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:55.515 01:49:03 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.515 01:49:03 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:55.515 01:49:03 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.515 01:49:03 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:55.515 01:49:03 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:55.515 01:49:03 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.515 01:49:03 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:28:55.774 [2024-11-17 01:49:04.073936] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:28:55.774 [2024-11-17 01:49:04.074858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:28:55.774 [2024-11-17 01:49:04.075831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:28:55.774 [2024-11-17 01:49:04.076838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:28:55.774 [2024-11-17 01:49:04.076883] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:28:55.774 [2024-11-17 01:49:04.076902] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:28:55.774 [2024-11-17 01:49:04.076917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
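The errors above come from the next negative check: the target side was set up with key0 (not key1) as the PSK for host nqn.2016-06.io.spdk:host0, so dialing it with key1 makes the TLS handshake collapse (errno 107 on the socket), and bdev_nvme_attach_controller returns -5, "Input/output error", which the NOT wrapper treats as the expected outcome. A sketch of the same probe, reusing the $rpc helper defined in the previous sketch:

  # Expect-failure attach with the wrong PSK.
  if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1; then
      echo "unexpected success: attach with mismatched PSK" >&2
      exit 1
  fi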
00:28:55.774 request: 00:28:55.774 { 00:28:55.774 "name": "nvme0", 00:28:55.774 "trtype": "tcp", 00:28:55.774 "traddr": "127.0.0.1", 00:28:55.774 "adrfam": "ipv4", 00:28:55.774 "trsvcid": "4420", 00:28:55.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:55.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:55.774 "prchk_reftag": false, 00:28:55.774 "prchk_guard": false, 00:28:55.774 "hdgst": false, 00:28:55.775 "ddgst": false, 00:28:55.775 "psk": "key1", 00:28:55.775 "allow_unrecognized_csi": false, 00:28:55.775 "method": "bdev_nvme_attach_controller", 00:28:55.775 "req_id": 1 00:28:55.775 } 00:28:55.775 Got JSON-RPC error response 00:28:55.775 response: 00:28:55.775 { 00:28:55.775 "code": -5, 00:28:55.775 "message": "Input/output error" 00:28:55.775 } 00:28:55.775 01:49:04 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:55.775 01:49:04 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:55.775 01:49:04 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:55.775 01:49:04 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:55.775 01:49:04 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:28:55.775 01:49:04 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:55.775 01:49:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:55.775 01:49:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:55.775 01:49:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:55.775 01:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.034 01:49:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:28:56.034 01:49:04 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:28:56.034 01:49:04 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:28:56.034 01:49:04 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:56.034 01:49:04 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:56.034 01:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.034 01:49:04 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:28:56.292 01:49:04 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:28:56.292 01:49:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:28:56.292 01:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:56.551 01:49:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:28:56.551 01:49:04 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:28:56.810 01:49:05 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:28:56.810 01:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:56.810 01:49:05 keyring_file -- keyring/file.sh@78 -- # jq length 00:28:57.069 01:49:05 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:28:57.069 01:49:05 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.n5Nr6Jcn9s 00:28:57.069 01:49:05 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.n5Nr6Jcn9s 00:28:57.069 01:49:05 keyring_file -- 
common/autotest_common.sh@652 -- # local es=0 00:28:57.069 01:49:05 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.n5Nr6Jcn9s 00:28:57.069 01:49:05 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:57.069 01:49:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.069 01:49:05 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:57.069 01:49:05 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.069 01:49:05 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n5Nr6Jcn9s 00:28:57.069 01:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n5Nr6Jcn9s 00:28:57.327 [2024-11-17 01:49:05.583730] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.n5Nr6Jcn9s': 0100660 00:28:57.327 [2024-11-17 01:49:05.583781] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:28:57.327 request: 00:28:57.327 { 00:28:57.327 "name": "key0", 00:28:57.327 "path": "/tmp/tmp.n5Nr6Jcn9s", 00:28:57.327 "method": "keyring_file_add_key", 00:28:57.328 "req_id": 1 00:28:57.328 } 00:28:57.328 Got JSON-RPC error response 00:28:57.328 response: 00:28:57.328 { 00:28:57.328 "code": -1, 00:28:57.328 "message": "Operation not permitted" 00:28:57.328 } 00:28:57.328 01:49:05 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:57.328 01:49:05 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.328 01:49:05 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.328 01:49:05 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.328 01:49:05 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.n5Nr6Jcn9s 00:28:57.328 01:49:05 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.n5Nr6Jcn9s 00:28:57.328 01:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.n5Nr6Jcn9s 00:28:57.586 01:49:05 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.n5Nr6Jcn9s 00:28:57.586 01:49:05 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:28:57.586 01:49:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:57.586 01:49:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:57.586 01:49:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:57.586 01:49:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:57.586 01:49:05 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:57.844 01:49:06 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:28:57.844 01:49:06 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:57.844 01:49:06 
keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:57.844 01:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:57.844 [2024-11-17 01:49:06.267954] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.n5Nr6Jcn9s': No such file or directory 00:28:57.844 [2024-11-17 01:49:06.268208] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:28:57.844 [2024-11-17 01:49:06.268243] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:28:57.844 [2024-11-17 01:49:06.268257] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:28:57.844 [2024-11-17 01:49:06.268272] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:57.844 [2024-11-17 01:49:06.268296] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:28:57.844 request: 00:28:57.844 { 00:28:57.844 "name": "nvme0", 00:28:57.844 "trtype": "tcp", 00:28:57.844 "traddr": "127.0.0.1", 00:28:57.844 "adrfam": "ipv4", 00:28:57.844 "trsvcid": "4420", 00:28:57.844 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:57.844 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:57.844 "prchk_reftag": false, 00:28:57.844 "prchk_guard": false, 00:28:57.844 "hdgst": false, 00:28:57.844 "ddgst": false, 00:28:57.844 "psk": "key0", 00:28:57.844 "allow_unrecognized_csi": false, 00:28:57.844 "method": "bdev_nvme_attach_controller", 00:28:57.844 "req_id": 1 00:28:57.844 } 00:28:57.844 Got JSON-RPC error response 00:28:57.844 response: 00:28:57.844 { 00:28:57.844 "code": -19, 00:28:57.844 "message": "No such device" 00:28:57.844 } 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.844 01:49:06 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.844 01:49:06 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:28:57.844 01:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:58.103 01:49:06 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:28:58.103 01:49:06 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:28:58.103 01:49:06 keyring_file -- keyring/common.sh@17 -- # name=key0 00:28:58.103 01:49:06 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:28:58.103 
01:49:06 keyring_file -- keyring/common.sh@17 -- # digest=0 00:28:58.103 01:49:06 keyring_file -- keyring/common.sh@18 -- # mktemp 00:28:58.103 01:49:06 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Mi3NId06AW 00:28:58.103 01:49:06 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:28:58.103 01:49:06 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:28:58.103 01:49:06 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:28:58.103 01:49:06 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:28:58.103 01:49:06 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:28:58.103 01:49:06 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:28:58.103 01:49:06 keyring_file -- nvmf/common.sh@733 -- # python - 00:28:58.362 01:49:06 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Mi3NId06AW 00:28:58.362 01:49:06 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Mi3NId06AW 00:28:58.362 01:49:06 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Mi3NId06AW 00:28:58.362 01:49:06 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Mi3NId06AW 00:28:58.362 01:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Mi3NId06AW 00:28:58.362 01:49:06 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.362 01:49:06 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:28:58.929 nvme0n1 00:28:58.929 01:49:07 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:28:58.929 01:49:07 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:28:58.929 01:49:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:58.929 01:49:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:58.929 01:49:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:58.929 01:49:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:58.929 01:49:07 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:28:58.929 01:49:07 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:28:58.929 01:49:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:28:59.188 01:49:07 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:28:59.188 01:49:07 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:28:59.188 01:49:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.188 01:49:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.188 01:49:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.446 01:49:07 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:28:59.446 01:49:07 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:28:59.446 01:49:07 keyring_file -- 
keyring/common.sh@12 -- # get_key key0 00:28:59.446 01:49:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:28:59.446 01:49:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:28:59.446 01:49:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:28:59.446 01:49:07 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.705 01:49:08 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:28:59.705 01:49:08 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:28:59.705 01:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:28:59.963 01:49:08 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:28:59.963 01:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:28:59.963 01:49:08 keyring_file -- keyring/file.sh@105 -- # jq length 00:29:00.221 01:49:08 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:29:00.221 01:49:08 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Mi3NId06AW 00:29:00.221 01:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Mi3NId06AW 00:29:00.480 01:49:08 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.wsmFklMsRV 00:29:00.480 01:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.wsmFklMsRV 00:29:00.739 01:49:08 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:00.739 01:49:08 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:00.996 nvme0n1 00:29:00.996 01:49:09 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:29:00.996 01:49:09 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:01.255 01:49:09 keyring_file -- keyring/file.sh@113 -- # config='{ 00:29:01.255 "subsystems": [ 00:29:01.255 { 00:29:01.255 "subsystem": "keyring", 00:29:01.255 "config": [ 00:29:01.255 { 00:29:01.255 "method": "keyring_file_add_key", 00:29:01.255 "params": { 00:29:01.255 "name": "key0", 00:29:01.255 "path": "/tmp/tmp.Mi3NId06AW" 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "keyring_file_add_key", 00:29:01.255 "params": { 00:29:01.255 "name": "key1", 00:29:01.255 "path": "/tmp/tmp.wsmFklMsRV" 00:29:01.255 } 00:29:01.255 } 00:29:01.255 ] 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "subsystem": "iobuf", 00:29:01.255 "config": [ 00:29:01.255 { 00:29:01.255 "method": "iobuf_set_options", 00:29:01.255 "params": { 00:29:01.255 "small_pool_count": 8192, 00:29:01.255 "large_pool_count": 1024, 00:29:01.255 "small_bufsize": 8192, 00:29:01.255 "large_bufsize": 135168, 00:29:01.255 "enable_numa": false 00:29:01.255 } 00:29:01.255 } 00:29:01.255 ] 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "subsystem": 
"sock", 00:29:01.255 "config": [ 00:29:01.255 { 00:29:01.255 "method": "sock_set_default_impl", 00:29:01.255 "params": { 00:29:01.255 "impl_name": "uring" 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "sock_impl_set_options", 00:29:01.255 "params": { 00:29:01.255 "impl_name": "ssl", 00:29:01.255 "recv_buf_size": 4096, 00:29:01.255 "send_buf_size": 4096, 00:29:01.255 "enable_recv_pipe": true, 00:29:01.255 "enable_quickack": false, 00:29:01.255 "enable_placement_id": 0, 00:29:01.255 "enable_zerocopy_send_server": true, 00:29:01.255 "enable_zerocopy_send_client": false, 00:29:01.255 "zerocopy_threshold": 0, 00:29:01.255 "tls_version": 0, 00:29:01.255 "enable_ktls": false 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "sock_impl_set_options", 00:29:01.255 "params": { 00:29:01.255 "impl_name": "posix", 00:29:01.255 "recv_buf_size": 2097152, 00:29:01.255 "send_buf_size": 2097152, 00:29:01.255 "enable_recv_pipe": true, 00:29:01.255 "enable_quickack": false, 00:29:01.255 "enable_placement_id": 0, 00:29:01.255 "enable_zerocopy_send_server": true, 00:29:01.255 "enable_zerocopy_send_client": false, 00:29:01.255 "zerocopy_threshold": 0, 00:29:01.255 "tls_version": 0, 00:29:01.255 "enable_ktls": false 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "sock_impl_set_options", 00:29:01.255 "params": { 00:29:01.255 "impl_name": "uring", 00:29:01.255 "recv_buf_size": 2097152, 00:29:01.255 "send_buf_size": 2097152, 00:29:01.255 "enable_recv_pipe": true, 00:29:01.255 "enable_quickack": false, 00:29:01.255 "enable_placement_id": 0, 00:29:01.255 "enable_zerocopy_send_server": false, 00:29:01.255 "enable_zerocopy_send_client": false, 00:29:01.255 "zerocopy_threshold": 0, 00:29:01.255 "tls_version": 0, 00:29:01.255 "enable_ktls": false 00:29:01.255 } 00:29:01.255 } 00:29:01.255 ] 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "subsystem": "vmd", 00:29:01.255 "config": [] 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "subsystem": "accel", 00:29:01.255 "config": [ 00:29:01.255 { 00:29:01.255 "method": "accel_set_options", 00:29:01.255 "params": { 00:29:01.255 "small_cache_size": 128, 00:29:01.255 "large_cache_size": 16, 00:29:01.255 "task_count": 2048, 00:29:01.255 "sequence_count": 2048, 00:29:01.255 "buf_count": 2048 00:29:01.255 } 00:29:01.255 } 00:29:01.255 ] 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "subsystem": "bdev", 00:29:01.255 "config": [ 00:29:01.255 { 00:29:01.255 "method": "bdev_set_options", 00:29:01.255 "params": { 00:29:01.255 "bdev_io_pool_size": 65535, 00:29:01.255 "bdev_io_cache_size": 256, 00:29:01.255 "bdev_auto_examine": true, 00:29:01.255 "iobuf_small_cache_size": 128, 00:29:01.255 "iobuf_large_cache_size": 16 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "bdev_raid_set_options", 00:29:01.255 "params": { 00:29:01.255 "process_window_size_kb": 1024, 00:29:01.255 "process_max_bandwidth_mb_sec": 0 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "bdev_iscsi_set_options", 00:29:01.255 "params": { 00:29:01.255 "timeout_sec": 30 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "bdev_nvme_set_options", 00:29:01.255 "params": { 00:29:01.255 "action_on_timeout": "none", 00:29:01.255 "timeout_us": 0, 00:29:01.255 "timeout_admin_us": 0, 00:29:01.255 "keep_alive_timeout_ms": 10000, 00:29:01.255 "arbitration_burst": 0, 00:29:01.255 "low_priority_weight": 0, 00:29:01.255 "medium_priority_weight": 0, 00:29:01.255 "high_priority_weight": 0, 00:29:01.255 "nvme_adminq_poll_period_us": 
10000, 00:29:01.255 "nvme_ioq_poll_period_us": 0, 00:29:01.255 "io_queue_requests": 512, 00:29:01.255 "delay_cmd_submit": true, 00:29:01.255 "transport_retry_count": 4, 00:29:01.255 "bdev_retry_count": 3, 00:29:01.255 "transport_ack_timeout": 0, 00:29:01.255 "ctrlr_loss_timeout_sec": 0, 00:29:01.255 "reconnect_delay_sec": 0, 00:29:01.255 "fast_io_fail_timeout_sec": 0, 00:29:01.255 "disable_auto_failback": false, 00:29:01.255 "generate_uuids": false, 00:29:01.255 "transport_tos": 0, 00:29:01.255 "nvme_error_stat": false, 00:29:01.255 "rdma_srq_size": 0, 00:29:01.255 "io_path_stat": false, 00:29:01.255 "allow_accel_sequence": false, 00:29:01.255 "rdma_max_cq_size": 0, 00:29:01.255 "rdma_cm_event_timeout_ms": 0, 00:29:01.255 "dhchap_digests": [ 00:29:01.255 "sha256", 00:29:01.255 "sha384", 00:29:01.255 "sha512" 00:29:01.255 ], 00:29:01.255 "dhchap_dhgroups": [ 00:29:01.255 "null", 00:29:01.255 "ffdhe2048", 00:29:01.255 "ffdhe3072", 00:29:01.255 "ffdhe4096", 00:29:01.255 "ffdhe6144", 00:29:01.255 "ffdhe8192" 00:29:01.255 ] 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "bdev_nvme_attach_controller", 00:29:01.255 "params": { 00:29:01.255 "name": "nvme0", 00:29:01.255 "trtype": "TCP", 00:29:01.255 "adrfam": "IPv4", 00:29:01.255 "traddr": "127.0.0.1", 00:29:01.255 "trsvcid": "4420", 00:29:01.255 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:01.255 "prchk_reftag": false, 00:29:01.255 "prchk_guard": false, 00:29:01.255 "ctrlr_loss_timeout_sec": 0, 00:29:01.255 "reconnect_delay_sec": 0, 00:29:01.255 "fast_io_fail_timeout_sec": 0, 00:29:01.255 "psk": "key0", 00:29:01.255 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:01.255 "hdgst": false, 00:29:01.255 "ddgst": false, 00:29:01.255 "multipath": "multipath" 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "bdev_nvme_set_hotplug", 00:29:01.255 "params": { 00:29:01.255 "period_us": 100000, 00:29:01.255 "enable": false 00:29:01.255 } 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "method": "bdev_wait_for_examine" 00:29:01.255 } 00:29:01.255 ] 00:29:01.255 }, 00:29:01.255 { 00:29:01.255 "subsystem": "nbd", 00:29:01.255 "config": [] 00:29:01.255 } 00:29:01.255 ] 00:29:01.255 }' 00:29:01.255 01:49:09 keyring_file -- keyring/file.sh@115 -- # killprocess 91552 00:29:01.255 01:49:09 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91552 ']' 00:29:01.255 01:49:09 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91552 00:29:01.255 01:49:09 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:01.255 01:49:09 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.255 01:49:09 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91552 00:29:01.255 killing process with pid 91552 00:29:01.255 Received shutdown signal, test time was about 1.000000 seconds 00:29:01.255 00:29:01.255 Latency(us) 00:29:01.255 [2024-11-17T01:49:09.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:01.255 [2024-11-17T01:49:09.714Z] =================================================================================================================== 00:29:01.255 [2024-11-17T01:49:09.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:01.256 01:49:09 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:01.256 01:49:09 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:01.256 01:49:09 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91552' 00:29:01.256 
01:49:09 keyring_file -- common/autotest_common.sh@973 -- # kill 91552 00:29:01.256 01:49:09 keyring_file -- common/autotest_common.sh@978 -- # wait 91552 00:29:02.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:02.192 01:49:10 keyring_file -- keyring/file.sh@118 -- # bperfpid=91804 00:29:02.192 01:49:10 keyring_file -- keyring/file.sh@120 -- # waitforlisten 91804 /var/tmp/bperf.sock 00:29:02.192 01:49:10 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 91804 ']' 00:29:02.192 01:49:10 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:02.192 01:49:10 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:02.192 01:49:10 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:02.192 01:49:10 keyring_file -- keyring/file.sh@116 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:02.192 01:49:10 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:02.192 01:49:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:02.192 01:49:10 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:29:02.192 "subsystems": [ 00:29:02.192 { 00:29:02.192 "subsystem": "keyring", 00:29:02.192 "config": [ 00:29:02.192 { 00:29:02.192 "method": "keyring_file_add_key", 00:29:02.192 "params": { 00:29:02.192 "name": "key0", 00:29:02.192 "path": "/tmp/tmp.Mi3NId06AW" 00:29:02.192 } 00:29:02.192 }, 00:29:02.192 { 00:29:02.192 "method": "keyring_file_add_key", 00:29:02.192 "params": { 00:29:02.192 "name": "key1", 00:29:02.192 "path": "/tmp/tmp.wsmFklMsRV" 00:29:02.192 } 00:29:02.192 } 00:29:02.192 ] 00:29:02.192 }, 00:29:02.192 { 00:29:02.192 "subsystem": "iobuf", 00:29:02.192 "config": [ 00:29:02.192 { 00:29:02.192 "method": "iobuf_set_options", 00:29:02.192 "params": { 00:29:02.192 "small_pool_count": 8192, 00:29:02.192 "large_pool_count": 1024, 00:29:02.192 "small_bufsize": 8192, 00:29:02.192 "large_bufsize": 135168, 00:29:02.192 "enable_numa": false 00:29:02.192 } 00:29:02.192 } 00:29:02.192 ] 00:29:02.192 }, 00:29:02.192 { 00:29:02.192 "subsystem": "sock", 00:29:02.192 "config": [ 00:29:02.192 { 00:29:02.192 "method": "sock_set_default_impl", 00:29:02.192 "params": { 00:29:02.192 "impl_name": "uring" 00:29:02.192 } 00:29:02.192 }, 00:29:02.192 { 00:29:02.192 "method": "sock_impl_set_options", 00:29:02.192 "params": { 00:29:02.192 "impl_name": "ssl", 00:29:02.192 "recv_buf_size": 4096, 00:29:02.192 "send_buf_size": 4096, 00:29:02.192 "enable_recv_pipe": true, 00:29:02.192 "enable_quickack": false, 00:29:02.192 "enable_placement_id": 0, 00:29:02.192 "enable_zerocopy_send_server": true, 00:29:02.192 "enable_zerocopy_send_client": false, 00:29:02.192 "zerocopy_threshold": 0, 00:29:02.192 "tls_version": 0, 00:29:02.192 "enable_ktls": false 00:29:02.192 } 00:29:02.192 }, 00:29:02.192 { 00:29:02.192 "method": "sock_impl_set_options", 00:29:02.192 "params": { 00:29:02.192 "impl_name": "posix", 00:29:02.192 "recv_buf_size": 2097152, 00:29:02.192 "send_buf_size": 2097152, 00:29:02.192 "enable_recv_pipe": true, 00:29:02.192 "enable_quickack": false, 00:29:02.192 "enable_placement_id": 0, 00:29:02.192 "enable_zerocopy_send_server": true, 00:29:02.192 "enable_zerocopy_send_client": false, 00:29:02.192 "zerocopy_threshold": 0, 00:29:02.192 "tls_version": 0, 00:29:02.192 "enable_ktls": false 
00:29:02.192 } 00:29:02.192 }, 00:29:02.192 { 00:29:02.192 "method": "sock_impl_set_options", 00:29:02.192 "params": { 00:29:02.192 "impl_name": "uring", 00:29:02.192 "recv_buf_size": 2097152, 00:29:02.192 "send_buf_size": 2097152, 00:29:02.192 "enable_recv_pipe": true, 00:29:02.192 "enable_quickack": false, 00:29:02.192 "enable_placement_id": 0, 00:29:02.192 "enable_zerocopy_send_server": false, 00:29:02.192 "enable_zerocopy_send_client": false, 00:29:02.192 "zerocopy_threshold": 0, 00:29:02.192 "tls_version": 0, 00:29:02.192 "enable_ktls": false 00:29:02.192 } 00:29:02.192 } 00:29:02.192 ] 00:29:02.192 }, 00:29:02.192 { 00:29:02.192 "subsystem": "vmd", 00:29:02.192 "config": [] 00:29:02.192 }, 00:29:02.192 { 00:29:02.192 "subsystem": "accel", 00:29:02.192 "config": [ 00:29:02.192 { 00:29:02.192 "method": "accel_set_options", 00:29:02.192 "params": { 00:29:02.192 "small_cache_size": 128, 00:29:02.192 "large_cache_size": 16, 00:29:02.192 "task_count": 2048, 00:29:02.192 "sequence_count": 2048, 00:29:02.192 "buf_count": 2048 00:29:02.193 } 00:29:02.193 } 00:29:02.193 ] 00:29:02.193 }, 00:29:02.193 { 00:29:02.193 "subsystem": "bdev", 00:29:02.193 "config": [ 00:29:02.193 { 00:29:02.193 "method": "bdev_set_options", 00:29:02.193 "params": { 00:29:02.193 "bdev_io_pool_size": 65535, 00:29:02.193 "bdev_io_cache_size": 256, 00:29:02.193 "bdev_auto_examine": true, 00:29:02.193 "iobuf_small_cache_size": 128, 00:29:02.193 "iobuf_large_cache_size": 16 00:29:02.193 } 00:29:02.193 }, 00:29:02.193 { 00:29:02.193 "method": "bdev_raid_set_options", 00:29:02.193 "params": { 00:29:02.193 "process_window_size_kb": 1024, 00:29:02.193 "process_max_bandwidth_mb_sec": 0 00:29:02.193 } 00:29:02.193 }, 00:29:02.193 { 00:29:02.193 "method": "bdev_iscsi_set_options", 00:29:02.193 "params": { 00:29:02.193 "timeout_sec": 30 00:29:02.193 } 00:29:02.193 }, 00:29:02.193 { 00:29:02.193 "method": "bdev_nvme_set_options", 00:29:02.193 "params": { 00:29:02.193 "action_on_timeout": "none", 00:29:02.193 "timeout_us": 0, 00:29:02.193 "timeout_admin_us": 0, 00:29:02.193 "keep_alive_timeout_ms": 10000, 00:29:02.193 "arbitration_burst": 0, 00:29:02.193 "low_priority_weight": 0, 00:29:02.193 "medium_priority_weight": 0, 00:29:02.193 "high_priority_weight": 0, 00:29:02.193 "nvme_adminq_poll_period_us": 10000, 00:29:02.193 "nvme_ioq_poll_period_us": 0, 00:29:02.193 "io_queue_requests": 512, 00:29:02.193 "delay_cmd_submit": true, 00:29:02.193 "transport_retry_count": 4, 00:29:02.193 "bdev_retry_count": 3, 00:29:02.193 "transport_ack_timeout": 0, 00:29:02.193 "ctrlr_loss_timeout_sec": 0, 00:29:02.193 "reconnect_delay_sec": 0, 00:29:02.193 "fast_io_fail_timeout_sec": 0, 00:29:02.193 "disable_auto_failback": false, 00:29:02.193 "generate_uuids": false, 00:29:02.193 "transport_tos": 0, 00:29:02.193 "nvme_error_stat": false, 00:29:02.193 "rdma_srq_size": 0, 00:29:02.193 "io_path_stat": false, 00:29:02.193 "allow_accel_sequence": false, 00:29:02.193 "rdma_max_cq_size": 0, 00:29:02.193 "rdma_cm_event_timeout_ms": 0, 00:29:02.193 "dhchap_digests": [ 00:29:02.193 "sha256", 00:29:02.193 "sha384", 00:29:02.193 "sha512" 00:29:02.193 ], 00:29:02.193 "dhchap_dhgroups": [ 00:29:02.193 "null", 00:29:02.193 "ffdhe2048", 00:29:02.193 "ffdhe3072", 00:29:02.193 "ffdhe4096", 00:29:02.193 "ffdhe6144", 00:29:02.193 "ffdhe8192" 00:29:02.193 ] 00:29:02.193 } 00:29:02.193 }, 00:29:02.193 { 00:29:02.193 "method": "bdev_nvme_attach_controller", 00:29:02.193 "params": { 00:29:02.193 "name": "nvme0", 00:29:02.193 "trtype": "TCP", 00:29:02.193 "adrfam": "IPv4", 
00:29:02.193 "traddr": "127.0.0.1", 00:29:02.193 "trsvcid": "4420", 00:29:02.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.193 "prchk_reftag": false, 00:29:02.193 "prchk_guard": false, 00:29:02.193 "ctrlr_loss_timeout_sec": 0, 00:29:02.193 "reconnect_delay_sec": 0, 00:29:02.193 "fast_io_fail_timeout_sec": 0, 00:29:02.193 "psk": "key0", 00:29:02.193 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:02.193 "hdgst": false, 00:29:02.193 "ddgst": false, 00:29:02.193 "multipath": "multipath" 00:29:02.193 } 00:29:02.193 }, 00:29:02.193 { 00:29:02.193 "method": "bdev_nvme_set_hotplug", 00:29:02.193 "params": { 00:29:02.193 "period_us": 100000, 00:29:02.193 "enable": false 00:29:02.193 } 00:29:02.193 }, 00:29:02.193 { 00:29:02.193 "method": "bdev_wait_for_examine" 00:29:02.193 } 00:29:02.193 ] 00:29:02.193 }, 00:29:02.193 { 00:29:02.193 "subsystem": "nbd", 00:29:02.193 "config": [] 00:29:02.193 } 00:29:02.193 ] 00:29:02.193 }' 00:29:02.193 [2024-11-17 01:49:10.446569] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 00:29:02.193 [2024-11-17 01:49:10.447049] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91804 ] 00:29:02.193 [2024-11-17 01:49:10.627751] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:02.452 [2024-11-17 01:49:10.716485] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:02.711 [2024-11-17 01:49:10.942805] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:02.711 [2024-11-17 01:49:11.042995] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:02.969 01:49:11 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:02.969 01:49:11 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:29:02.969 01:49:11 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:29:02.969 01:49:11 keyring_file -- keyring/file.sh@121 -- # jq length 00:29:02.969 01:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.228 01:49:11 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:03.228 01:49:11 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:29:03.228 01:49:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:29:03.228 01:49:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.228 01:49:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:03.228 01:49:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.228 01:49:11 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.487 01:49:11 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:29:03.487 01:49:11 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:29:03.487 01:49:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:29:03.487 01:49:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:03.487 01:49:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:03.487 01:49:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:03.487 01:49:11 keyring_file -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:03.747 01:49:12 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:29:03.747 01:49:12 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:29:03.747 01:49:12 keyring_file -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:03.747 01:49:12 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:29:04.006 01:49:12 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:29:04.006 01:49:12 keyring_file -- keyring/file.sh@1 -- # cleanup 00:29:04.006 01:49:12 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Mi3NId06AW /tmp/tmp.wsmFklMsRV 00:29:04.006 01:49:12 keyring_file -- keyring/file.sh@20 -- # killprocess 91804 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91804 ']' 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91804 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91804 00:29:04.006 killing process with pid 91804 00:29:04.006 Received shutdown signal, test time was about 1.000000 seconds 00:29:04.006 00:29:04.006 Latency(us) 00:29:04.006 [2024-11-17T01:49:12.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.006 [2024-11-17T01:49:12.465Z] =================================================================================================================== 00:29:04.006 [2024-11-17T01:49:12.465Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91804' 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@973 -- # kill 91804 00:29:04.006 01:49:12 keyring_file -- common/autotest_common.sh@978 -- # wait 91804 00:29:04.943 01:49:13 keyring_file -- keyring/file.sh@21 -- # killprocess 91539 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 91539 ']' 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@958 -- # kill -0 91539 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@959 -- # uname 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91539 00:29:04.943 killing process with pid 91539 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91539' 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@973 -- # kill 91539 00:29:04.943 01:49:13 keyring_file -- common/autotest_common.sh@978 -- # wait 91539 00:29:06.850 ************************************ 00:29:06.850 END TEST keyring_file 00:29:06.850 ************************************ 00:29:06.850 00:29:06.850 real 0m17.767s 00:29:06.850 user 0m41.658s 
00:29:06.850 sys 0m2.799s 00:29:06.850 01:49:14 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:06.850 01:49:14 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:29:06.850 01:49:14 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:29:06.850 01:49:14 -- spdk/autotest.sh@294 -- # run_test keyring_linux /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:06.850 01:49:14 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:06.850 01:49:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:06.850 01:49:14 -- common/autotest_common.sh@10 -- # set +x 00:29:06.850 ************************************ 00:29:06.850 START TEST keyring_linux 00:29:06.850 ************************************ 00:29:06.850 01:49:14 keyring_linux -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/scripts/keyctl-session-wrapper /home/vagrant/spdk_repo/spdk/test/keyring/linux.sh 00:29:06.850 Joined session keyring: 265927874 00:29:06.850 * Looking for test storage... 00:29:06.850 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:29:06.850 01:49:14 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:06.850 01:49:14 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:29:06.850 01:49:14 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:06.850 01:49:15 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@345 -- # : 1 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@368 -- # return 0 00:29:06.850 01:49:15 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:06.850 01:49:15 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:06.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.850 --rc genhtml_branch_coverage=1 00:29:06.850 --rc genhtml_function_coverage=1 00:29:06.850 --rc genhtml_legend=1 00:29:06.850 --rc geninfo_all_blocks=1 00:29:06.850 --rc geninfo_unexecuted_blocks=1 00:29:06.850 00:29:06.850 ' 00:29:06.850 01:49:15 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:06.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.850 --rc genhtml_branch_coverage=1 00:29:06.850 --rc genhtml_function_coverage=1 00:29:06.850 --rc genhtml_legend=1 00:29:06.850 --rc geninfo_all_blocks=1 00:29:06.850 --rc geninfo_unexecuted_blocks=1 00:29:06.850 00:29:06.850 ' 00:29:06.850 01:49:15 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:06.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.850 --rc genhtml_branch_coverage=1 00:29:06.850 --rc genhtml_function_coverage=1 00:29:06.850 --rc genhtml_legend=1 00:29:06.850 --rc geninfo_all_blocks=1 00:29:06.850 --rc geninfo_unexecuted_blocks=1 00:29:06.850 00:29:06.850 ' 00:29:06.850 01:49:15 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:06.850 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:06.850 --rc genhtml_branch_coverage=1 00:29:06.850 --rc genhtml_function_coverage=1 00:29:06.850 --rc genhtml_legend=1 00:29:06.850 --rc geninfo_all_blocks=1 00:29:06.850 --rc geninfo_unexecuted_blocks=1 00:29:06.850 00:29:06.850 ' 00:29:06.850 01:49:15 keyring_linux -- keyring/linux.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:06.850 01:49:15 
keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5af99618-86f8-46bf-8130-da23f42c5a81 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5af99618-86f8-46bf-8130-da23f42c5a81 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:06.850 01:49:15 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:06.850 01:49:15 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.850 01:49:15 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.850 01:49:15 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.850 01:49:15 keyring_linux -- paths/export.sh@5 -- # export PATH 00:29:06.850 01:49:15 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@51 -- # : 0 
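The nvmf/common.sh sourcing above pins the target address to 127.0.0.1:4420 and generates a fresh host identity with nvme gen-hostnqn. Below is a minimal sketch of that host-identity setup; the ${NVME_HOSTNQN##*:} derivation is an assumption about how the UUID is extracted, while the other variables mirror what the log shows.

```bash
# Host identity as used by the keyring tests (sketch).
NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*:}       # bare UUID, as NVME_HOSTID appears in the log
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
printf '%s\n' "${NVME_HOST[@]}"
```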
00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:06.850 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:06.850 01:49:15 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:06.850 01:49:15 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:06.850 01:49:15 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:29:06.850 01:49:15 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:29:06.850 01:49:15 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:29:06.850 01:49:15 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:06.850 01:49:15 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:06.850 01:49:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:29:06.850 /tmp/:spdk-test:key0 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:29:06.851 01:49:15 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 
112233445566778899aabbccddeeff00 0 00:29:06.851 01:49:15 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:06.851 01:49:15 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:29:06.851 01:49:15 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:29:06.851 01:49:15 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:29:06.851 01:49:15 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:29:06.851 01:49:15 keyring_linux -- nvmf/common.sh@733 -- # python - 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:29:06.851 /tmp/:spdk-test:key1 00:29:06.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:06.851 01:49:15 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:29:06.851 01:49:15 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=91944 00:29:06.851 01:49:15 keyring_linux -- keyring/linux.sh@50 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:06.851 01:49:15 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 91944 00:29:06.851 01:49:15 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 91944 ']' 00:29:06.851 01:49:15 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:06.851 01:49:15 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:06.851 01:49:15 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:06.851 01:49:15 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:06.851 01:49:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:06.851 [2024-11-17 01:49:15.289527] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
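prep_key above renders each hex key into the NVMe TLS interchange form and writes it to a private file under /tmp. A minimal sketch of the end state for key0 follows, reusing the already-formatted payload that appears later in this log rather than re-deriving it.

```bash
# prep_key end state for key0 (sketch; payload copied from the keyctl output below).
path='/tmp/:spdk-test:key0'
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
printf '%s' "$psk" > "$path"
chmod 0600 "$path"    # same permission tightening the test applies
```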
00:29:06.851 [2024-11-17 01:49:15.289979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91944 ] 00:29:07.110 [2024-11-17 01:49:15.453937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:07.110 [2024-11-17 01:49:15.531291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:07.368 [2024-11-17 01:49:15.707791] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:07.937 01:49:16 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:07.937 [2024-11-17 01:49:16.198910] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.937 null0 00:29:07.937 [2024-11-17 01:49:16.230888] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:07.937 [2024-11-17 01:49:16.231126] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.937 01:49:16 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:29:07.937 654496212 00:29:07.937 01:49:16 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:29:07.937 22366720 00:29:07.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:07.937 01:49:16 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=91962 00:29:07.937 01:49:16 keyring_linux -- keyring/linux.sh@68 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:29:07.937 01:49:16 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 91962 /var/tmp/bperf.sock 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 91962 ']' 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.937 01:49:16 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:07.937 [2024-11-17 01:49:16.346000] Starting SPDK v25.01-pre git sha1 83e8405e4 / DPDK 24.03.0 initialization... 
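The two keyctl add calls above load the interchange PSKs into the kernel session keyring as user-type keys named :spdk-test:key0 and :spdk-test:key1, printing the serial numbers (654496212 and 22366720) that the later checks compare against. A sketch of that round trip for key0, with the key name and payload taken verbatim from this log:

```bash
# Session-keyring round trip for key0 (sketch).
name=':spdk-test:key0'
psk='NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'

sn=$(keyctl add user "$name" "$psk" @s)   # prints the serial, 654496212 in this run
keyctl search @s user "$name"             # resolves the same serial by name
keyctl print "$sn"                        # dumps the NVMeTLSkey-1 payload back
keyctl unlink "$sn" @s                    # cleanup, as the test's unlink_key step does
```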
00:29:07.937 [2024-11-17 01:49:16.346325] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91962 ] 00:29:08.196 [2024-11-17 01:49:16.504713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.196 [2024-11-17 01:49:16.589805] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.133 01:49:17 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:09.133 01:49:17 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:29:09.133 01:49:17 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:29:09.133 01:49:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:29:09.392 01:49:17 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:29:09.392 01:49:17 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:09.651 [2024-11-17 01:49:17.935977] sock.c: 25:sock_subsystem_init: *NOTICE*: Default socket implementation override: uring 00:29:09.651 01:49:18 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:09.651 01:49:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:29:09.911 [2024-11-17 01:49:18.232345] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:09.911 nvme0n1 00:29:09.911 01:49:18 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:29:09.911 01:49:18 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:29:09.911 01:49:18 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:09.911 01:49:18 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:09.911 01:49:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.911 01:49:18 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:10.171 01:49:18 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:29:10.171 01:49:18 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:10.171 01:49:18 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:29:10.171 01:49:18 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:29:10.171 01:49:18 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.171 01:49:18 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.171 01:49:18 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:29:10.430 01:49:18 keyring_linux -- keyring/linux.sh@25 -- # sn=654496212 00:29:10.430 01:49:18 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:29:10.430 01:49:18 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 
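check_keys then cross-checks bdevperf's view of the keyring against the kernel's: keyring_get_keys over the bperf RPC socket has to report exactly one key whose serial matches what keyctl search finds for :spdk-test:key0. A condensed sketch of that comparison, built from the rpc.py, jq and keyctl calls shown above (the rpc helper function is shorthand, not part of the test scripts):

```bash
# Cross-check: SPDK's keyring view over RPC vs. the kernel session keyring (sketch).
rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }

count=$(rpc keyring_get_keys | jq length)                                   # expect 1
sn_rpc=$(rpc keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
sn_kernel=$(keyctl search @s user :spdk-test:key0)

[[ $count -eq 1 && $sn_rpc == "$sn_kernel" ]] || echo 'keyring mismatch' >&2
```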
00:29:10.430 01:49:18 keyring_linux -- keyring/linux.sh@26 -- # [[ 654496212 == \6\5\4\4\9\6\2\1\2 ]] 00:29:10.430 01:49:18 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 654496212 00:29:10.430 01:49:18 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:29:10.430 01:49:18 keyring_linux -- keyring/linux.sh@79 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:10.689 Running I/O for 1 seconds... 00:29:11.625 10329.00 IOPS, 40.35 MiB/s 00:29:11.625 Latency(us) 00:29:11.625 [2024-11-17T01:49:20.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.625 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:11.625 nvme0n1 : 1.01 10342.54 40.40 0.00 0.00 12300.52 3961.95 16801.05 00:29:11.625 [2024-11-17T01:49:20.084Z] =================================================================================================================== 00:29:11.625 [2024-11-17T01:49:20.084Z] Total : 10342.54 40.40 0.00 0.00 12300.52 3961.95 16801.05 00:29:11.625 { 00:29:11.625 "results": [ 00:29:11.625 { 00:29:11.625 "job": "nvme0n1", 00:29:11.625 "core_mask": "0x2", 00:29:11.625 "workload": "randread", 00:29:11.625 "status": "finished", 00:29:11.625 "queue_depth": 128, 00:29:11.625 "io_size": 4096, 00:29:11.625 "runtime": 1.011164, 00:29:11.625 "iops": 10342.535928889873, 00:29:11.625 "mibps": 40.400530972226065, 00:29:11.625 "io_failed": 0, 00:29:11.625 "io_timeout": 0, 00:29:11.625 "avg_latency_us": 12300.516089639945, 00:29:11.625 "min_latency_us": 3961.949090909091, 00:29:11.625 "max_latency_us": 16801.04727272727 00:29:11.625 } 00:29:11.625 ], 00:29:11.625 "core_count": 1 00:29:11.625 } 00:29:11.625 01:49:20 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:11.625 01:49:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:11.884 01:49:20 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:29:11.884 01:49:20 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:29:11.884 01:49:20 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:29:11.884 01:49:20 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:29:11.884 01:49:20 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:29:11.884 01:49:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.142 01:49:20 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:29:12.143 01:49:20 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:29:12.143 01:49:20 keyring_linux -- keyring/linux.sh@23 -- # return 00:29:12.143 01:49:20 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:12.143 01:49:20 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:29:12.143 01:49:20 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 
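For reference, the positive pass above (idle bdevperf, Linux keyring plugin enabled, TCP attach with key0, one second of 4k randread, detach) is driven entirely over bdevperf's RPC socket. The sketch below is assembled from the commands in this log; the real test additionally waits for the RPC socket (waitforlisten) before issuing the first call.

```bash
# RPC-driven bdevperf run (sketch; commands collected from the log above).
SPDK=/home/vagrant/spdk_repo/spdk
rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }

"$SPDK/build/examples/bdevperf" -q 128 -o 4k -w randread -t 1 -m 2 \
    -r /var/tmp/bperf.sock -z --wait-for-rpc &

rpc keyring_linux_set_options --enable   # let bdevperf resolve PSKs via the session keyring
rpc framework_start_init
rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests
rpc bdev_nvme_detach_controller nvme0
```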
00:29:12.143 01:49:20 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:29:12.143 01:49:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:12.143 01:49:20 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:29:12.143 01:49:20 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:12.143 01:49:20 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:12.143 01:49:20 keyring_linux -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:29:12.402 [2024-11-17 01:49:20.788296] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:12.402 [2024-11-17 01:49:20.788600] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (107): Transport endpoint is not connected 00:29:12.402 [2024-11-17 01:49:20.789568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x615000030280 (9): Bad file descriptor 00:29:12.402 [2024-11-17 01:49:20.790560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:29:12.402 [2024-11-17 01:49:20.790604] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:12.402 [2024-11-17 01:49:20.790656] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:29:12.402 [2024-11-17 01:49:20.790686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
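These connection errors are the expected outcome: :spdk-test:key1 is not the PSK the target side was set up with, so the attach has to fail, and the NOT wrapper only passes when rpc.py exits non-zero (the JSON-RPC request and error response it received are dumped just below). Reduced to a sketch, the check looks like this; the if/exit shape is illustrative, the actual test uses its NOT helper.

```bash
# Expected-failure check (sketch): attaching with the wrong PSK must fail.
if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
        bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1; then
    echo 'attach with the wrong key unexpectedly succeeded' >&2
    exit 1
fi
```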
00:29:12.402 request: 00:29:12.402 { 00:29:12.402 "name": "nvme0", 00:29:12.402 "trtype": "tcp", 00:29:12.402 "traddr": "127.0.0.1", 00:29:12.402 "adrfam": "ipv4", 00:29:12.402 "trsvcid": "4420", 00:29:12.402 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:12.402 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:12.402 "prchk_reftag": false, 00:29:12.402 "prchk_guard": false, 00:29:12.402 "hdgst": false, 00:29:12.402 "ddgst": false, 00:29:12.402 "psk": ":spdk-test:key1", 00:29:12.402 "allow_unrecognized_csi": false, 00:29:12.402 "method": "bdev_nvme_attach_controller", 00:29:12.402 "req_id": 1 00:29:12.402 } 00:29:12.402 Got JSON-RPC error response 00:29:12.402 response: 00:29:12.402 { 00:29:12.402 "code": -5, 00:29:12.402 "message": "Input/output error" 00:29:12.402 } 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@33 -- # sn=654496212 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 654496212 00:29:12.402 1 links removed 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@33 -- # sn=22366720 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 22366720 00:29:12.402 1 links removed 00:29:12.402 01:49:20 keyring_linux -- keyring/linux.sh@41 -- # killprocess 91962 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 91962 ']' 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 91962 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91962 00:29:12.402 killing process with pid 91962 00:29:12.402 Received shutdown signal, test time was about 1.000000 seconds 00:29:12.402 00:29:12.402 Latency(us) 00:29:12.402 [2024-11-17T01:49:20.861Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:12.402 [2024-11-17T01:49:20.861Z] =================================================================================================================== 00:29:12.402 [2024-11-17T01:49:20.861Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:12.402 01:49:20 keyring_linux -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91962' 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@973 -- # kill 91962 00:29:12.402 01:49:20 keyring_linux -- common/autotest_common.sh@978 -- # wait 91962 00:29:13.340 01:49:21 keyring_linux -- keyring/linux.sh@42 -- # killprocess 91944 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 91944 ']' 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 91944 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 91944 00:29:13.340 killing process with pid 91944 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 91944' 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@973 -- # kill 91944 00:29:13.340 01:49:21 keyring_linux -- common/autotest_common.sh@978 -- # wait 91944 00:29:15.244 ************************************ 00:29:15.244 END TEST keyring_linux 00:29:15.244 ************************************ 00:29:15.244 00:29:15.244 real 0m8.385s 00:29:15.244 user 0m15.214s 00:29:15.244 sys 0m1.493s 00:29:15.244 01:49:23 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:15.244 01:49:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:29:15.244 01:49:23 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:29:15.244 01:49:23 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:29:15.244 01:49:23 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:15.244 01:49:23 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:29:15.244 01:49:23 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:15.244 01:49:23 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:29:15.245 01:49:23 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:29:15.245 01:49:23 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:29:15.245 01:49:23 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:15.245 01:49:23 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:29:15.245 01:49:23 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:29:15.245 01:49:23 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:29:15.245 01:49:23 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:29:15.245 01:49:23 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:29:15.245 01:49:23 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:29:15.245 01:49:23 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:29:15.245 01:49:23 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:29:15.245 01:49:23 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.245 01:49:23 -- common/autotest_common.sh@10 -- # set +x 00:29:15.245 01:49:23 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:29:15.245 01:49:23 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:29:15.245 01:49:23 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:29:15.245 01:49:23 -- common/autotest_common.sh@10 -- # set +x 00:29:17.160 INFO: APP EXITING 00:29:17.160 INFO: killing all VMs 
00:29:17.160 INFO: killing vhost app 00:29:17.160 INFO: EXIT DONE 00:29:17.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:17.420 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:29:17.420 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:29:18.357 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:29:18.357 Cleaning 00:29:18.357 Removing: /var/run/dpdk/spdk0/config 00:29:18.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:18.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:18.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:18.357 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:18.357 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:18.357 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:18.357 Removing: /var/run/dpdk/spdk1/config 00:29:18.357 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:18.357 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:18.357 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:18.357 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:18.357 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:18.357 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:18.357 Removing: /var/run/dpdk/spdk2/config 00:29:18.357 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:18.357 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:18.357 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:18.357 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:18.357 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:18.357 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:18.357 Removing: /var/run/dpdk/spdk3/config 00:29:18.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:18.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:18.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:18.357 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:18.357 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:18.357 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:18.357 Removing: /var/run/dpdk/spdk4/config 00:29:18.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:18.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:18.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:18.357 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:18.357 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:18.357 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:18.357 Removing: /dev/shm/nvmf_trace.0 00:29:18.357 Removing: /dev/shm/spdk_tgt_trace.pid57413 00:29:18.357 Removing: /var/run/dpdk/spdk0 00:29:18.357 Removing: /var/run/dpdk/spdk1 00:29:18.357 Removing: /var/run/dpdk/spdk2 00:29:18.357 Removing: /var/run/dpdk/spdk3 00:29:18.357 Removing: /var/run/dpdk/spdk4 00:29:18.357 Removing: /var/run/dpdk/spdk_pid57205 00:29:18.357 Removing: /var/run/dpdk/spdk_pid57413 00:29:18.358 Removing: /var/run/dpdk/spdk_pid57636 00:29:18.358 Removing: /var/run/dpdk/spdk_pid57735 00:29:18.358 Removing: /var/run/dpdk/spdk_pid57780 00:29:18.358 Removing: /var/run/dpdk/spdk_pid57908 00:29:18.358 Removing: /var/run/dpdk/spdk_pid57926 00:29:18.358 Removing: /var/run/dpdk/spdk_pid58085 00:29:18.358 Removing: /var/run/dpdk/spdk_pid58299 00:29:18.358 Removing: /var/run/dpdk/spdk_pid58465 00:29:18.358 Removing: /var/run/dpdk/spdk_pid58560 00:29:18.358 
Removing: /var/run/dpdk/spdk_pid58666 00:29:18.358 Removing: /var/run/dpdk/spdk_pid58777 00:29:18.358 Removing: /var/run/dpdk/spdk_pid58874 00:29:18.358 Removing: /var/run/dpdk/spdk_pid58919 00:29:18.358 Removing: /var/run/dpdk/spdk_pid58950 00:29:18.358 Removing: /var/run/dpdk/spdk_pid59027 00:29:18.358 Removing: /var/run/dpdk/spdk_pid59133 00:29:18.358 Removing: /var/run/dpdk/spdk_pid59597 00:29:18.358 Removing: /var/run/dpdk/spdk_pid59661 00:29:18.358 Removing: /var/run/dpdk/spdk_pid59724 00:29:18.358 Removing: /var/run/dpdk/spdk_pid59745 00:29:18.358 Removing: /var/run/dpdk/spdk_pid59877 00:29:18.358 Removing: /var/run/dpdk/spdk_pid59893 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60040 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60056 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60115 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60140 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60198 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60216 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60393 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60430 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60517 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60870 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60883 00:29:18.358 Removing: /var/run/dpdk/spdk_pid60926 00:29:18.617 Removing: /var/run/dpdk/spdk_pid60957 00:29:18.617 Removing: /var/run/dpdk/spdk_pid60979 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61010 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61030 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61063 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61093 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61114 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61142 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61173 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61197 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61220 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61251 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61277 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61299 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61330 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61355 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61383 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61420 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61451 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61487 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61571 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61606 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61633 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61668 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61695 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61709 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61764 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61789 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61830 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61851 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61873 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61893 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61910 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61932 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61953 00:29:18.617 Removing: /var/run/dpdk/spdk_pid61975 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62015 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62054 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62070 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62116 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62132 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62157 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62204 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62222 00:29:18.617 Removing: 
/var/run/dpdk/spdk_pid62266 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62280 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62300 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62319 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62334 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62357 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62372 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62392 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62485 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62562 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62725 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62765 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62822 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62855 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62884 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62910 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62956 00:29:18.617 Removing: /var/run/dpdk/spdk_pid62979 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63068 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63107 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63180 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63291 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63381 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63433 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63556 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63610 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63660 00:29:18.617 Removing: /var/run/dpdk/spdk_pid63910 00:29:18.617 Removing: /var/run/dpdk/spdk_pid64023 00:29:18.617 Removing: /var/run/dpdk/spdk_pid64069 00:29:18.617 Removing: /var/run/dpdk/spdk_pid64105 00:29:18.617 Removing: /var/run/dpdk/spdk_pid64145 00:29:18.617 Removing: /var/run/dpdk/spdk_pid64196 00:29:18.617 Removing: /var/run/dpdk/spdk_pid64246 00:29:18.617 Removing: /var/run/dpdk/spdk_pid64285 00:29:18.617 Removing: /var/run/dpdk/spdk_pid64693 00:29:18.877 Removing: /var/run/dpdk/spdk_pid64732 00:29:18.877 Removing: /var/run/dpdk/spdk_pid65102 00:29:18.877 Removing: /var/run/dpdk/spdk_pid65576 00:29:18.877 Removing: /var/run/dpdk/spdk_pid65861 00:29:18.877 Removing: /var/run/dpdk/spdk_pid66782 00:29:18.877 Removing: /var/run/dpdk/spdk_pid67742 00:29:18.877 Removing: /var/run/dpdk/spdk_pid67876 00:29:18.877 Removing: /var/run/dpdk/spdk_pid67946 00:29:18.877 Removing: /var/run/dpdk/spdk_pid69416 00:29:18.877 Removing: /var/run/dpdk/spdk_pid69784 00:29:18.877 Removing: /var/run/dpdk/spdk_pid73553 00:29:18.877 Removing: /var/run/dpdk/spdk_pid73940 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74051 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74196 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74237 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74272 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74310 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74423 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74571 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74760 00:29:18.877 Removing: /var/run/dpdk/spdk_pid74849 00:29:18.877 Removing: /var/run/dpdk/spdk_pid75062 00:29:18.877 Removing: /var/run/dpdk/spdk_pid75163 00:29:18.877 Removing: /var/run/dpdk/spdk_pid75274 00:29:18.877 Removing: /var/run/dpdk/spdk_pid75655 00:29:18.877 Removing: /var/run/dpdk/spdk_pid76088 00:29:18.877 Removing: /var/run/dpdk/spdk_pid76089 00:29:18.877 Removing: /var/run/dpdk/spdk_pid76090 00:29:18.877 Removing: /var/run/dpdk/spdk_pid76371 00:29:18.877 Removing: /var/run/dpdk/spdk_pid76661 00:29:18.877 Removing: /var/run/dpdk/spdk_pid76664 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79018 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79437 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79446 
00:29:18.877 Removing: /var/run/dpdk/spdk_pid79784 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79804 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79825 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79859 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79869 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79955 00:29:18.877 Removing: /var/run/dpdk/spdk_pid79962 00:29:18.877 Removing: /var/run/dpdk/spdk_pid80067 00:29:18.877 Removing: /var/run/dpdk/spdk_pid80076 00:29:18.877 Removing: /var/run/dpdk/spdk_pid80185 00:29:18.877 Removing: /var/run/dpdk/spdk_pid80188 00:29:18.877 Removing: /var/run/dpdk/spdk_pid80638 00:29:18.877 Removing: /var/run/dpdk/spdk_pid80680 00:29:18.877 Removing: /var/run/dpdk/spdk_pid80790 00:29:18.877 Removing: /var/run/dpdk/spdk_pid80858 00:29:18.877 Removing: /var/run/dpdk/spdk_pid81224 00:29:18.877 Removing: /var/run/dpdk/spdk_pid81433 00:29:18.877 Removing: /var/run/dpdk/spdk_pid81871 00:29:18.877 Removing: /var/run/dpdk/spdk_pid82448 00:29:18.877 Removing: /var/run/dpdk/spdk_pid83320 00:29:18.877 Removing: /var/run/dpdk/spdk_pid83976 00:29:18.877 Removing: /var/run/dpdk/spdk_pid83983 00:29:18.877 Removing: /var/run/dpdk/spdk_pid85998 00:29:18.877 Removing: /var/run/dpdk/spdk_pid86066 00:29:18.877 Removing: /var/run/dpdk/spdk_pid86134 00:29:18.877 Removing: /var/run/dpdk/spdk_pid86201 00:29:18.877 Removing: /var/run/dpdk/spdk_pid86335 00:29:18.878 Removing: /var/run/dpdk/spdk_pid86402 00:29:18.878 Removing: /var/run/dpdk/spdk_pid86468 00:29:18.878 Removing: /var/run/dpdk/spdk_pid86525 00:29:18.878 Removing: /var/run/dpdk/spdk_pid86908 00:29:18.878 Removing: /var/run/dpdk/spdk_pid88138 00:29:18.878 Removing: /var/run/dpdk/spdk_pid88291 00:29:18.878 Removing: /var/run/dpdk/spdk_pid88539 00:29:18.878 Removing: /var/run/dpdk/spdk_pid89147 00:29:18.878 Removing: /var/run/dpdk/spdk_pid89311 00:29:18.878 Removing: /var/run/dpdk/spdk_pid89471 00:29:18.878 Removing: /var/run/dpdk/spdk_pid89569 00:29:18.878 Removing: /var/run/dpdk/spdk_pid89733 00:29:18.878 Removing: /var/run/dpdk/spdk_pid89842 00:29:18.878 Removing: /var/run/dpdk/spdk_pid90573 00:29:18.878 Removing: /var/run/dpdk/spdk_pid90608 00:29:18.878 Removing: /var/run/dpdk/spdk_pid90646 00:29:18.878 Removing: /var/run/dpdk/spdk_pid90999 00:29:19.137 Removing: /var/run/dpdk/spdk_pid91035 00:29:19.137 Removing: /var/run/dpdk/spdk_pid91072 00:29:19.137 Removing: /var/run/dpdk/spdk_pid91539 00:29:19.137 Removing: /var/run/dpdk/spdk_pid91552 00:29:19.137 Removing: /var/run/dpdk/spdk_pid91804 00:29:19.137 Removing: /var/run/dpdk/spdk_pid91944 00:29:19.137 Removing: /var/run/dpdk/spdk_pid91962 00:29:19.137 Clean 00:29:19.137 01:49:27 -- common/autotest_common.sh@1453 -- # return 0 00:29:19.137 01:49:27 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:29:19.137 01:49:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.137 01:49:27 -- common/autotest_common.sh@10 -- # set +x 00:29:19.137 01:49:27 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:29:19.137 01:49:27 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:19.137 01:49:27 -- common/autotest_common.sh@10 -- # set +x 00:29:19.137 01:49:27 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:19.137 01:49:27 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:29:19.137 01:49:27 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:29:19.137 01:49:27 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:29:19.137 01:49:27 -- spdk/autotest.sh@398 
-- # hostname 00:29:19.137 01:49:27 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:29:19.396 geninfo: WARNING: invalid characters removed from testname! 00:29:45.945 01:49:51 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:46.204 01:49:54 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:48.740 01:49:57 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:51.273 01:49:59 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:53.829 01:50:02 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:56.436 01:50:04 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:29:58.972 01:50:07 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:29:58.972 01:50:07 -- spdk/autorun.sh@1 -- $ timing_finish 00:29:58.972 01:50:07 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:29:58.972 01:50:07 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:29:58.972 01:50:07 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:29:58.972 01:50:07 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: 
--countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:29:58.972 + [[ -n 5253 ]] 00:29:58.972 + sudo kill 5253 00:29:58.982 [Pipeline] } 00:29:58.996 [Pipeline] // timeout 00:29:59.001 [Pipeline] } 00:29:59.013 [Pipeline] // stage 00:29:59.018 [Pipeline] } 00:29:59.029 [Pipeline] // catchError 00:29:59.039 [Pipeline] stage 00:29:59.041 [Pipeline] { (Stop VM) 00:29:59.052 [Pipeline] sh 00:29:59.335 + vagrant halt 00:30:02.624 ==> default: Halting domain... 00:30:09.217 [Pipeline] sh 00:30:09.499 + vagrant destroy -f 00:30:12.034 ==> default: Removing domain... 00:30:12.305 [Pipeline] sh 00:30:12.587 + mv output /var/jenkins/workspace/nvmf-tcp-uring-vg-autotest/output 00:30:12.596 [Pipeline] } 00:30:12.611 [Pipeline] // stage 00:30:12.617 [Pipeline] } 00:30:12.633 [Pipeline] // dir 00:30:12.638 [Pipeline] } 00:30:12.655 [Pipeline] // wrap 00:30:12.662 [Pipeline] } 00:30:12.676 [Pipeline] // catchError 00:30:12.686 [Pipeline] stage 00:30:12.688 [Pipeline] { (Epilogue) 00:30:12.703 [Pipeline] sh 00:30:12.987 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:30:18.271 [Pipeline] catchError 00:30:18.274 [Pipeline] { 00:30:18.288 [Pipeline] sh 00:30:18.570 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:30:18.829 Artifacts sizes are good 00:30:18.838 [Pipeline] } 00:30:18.853 [Pipeline] // catchError 00:30:18.865 [Pipeline] archiveArtifacts 00:30:18.872 Archiving artifacts 00:30:19.017 [Pipeline] cleanWs 00:30:19.036 [WS-CLEANUP] Deleting project workspace... 00:30:19.036 [WS-CLEANUP] Deferred wipeout is used... 00:30:19.072 [WS-CLEANUP] done 00:30:19.075 [Pipeline] } 00:30:19.092 [Pipeline] // stage 00:30:19.098 [Pipeline] } 00:30:19.112 [Pipeline] // node 00:30:19.118 [Pipeline] End of Pipeline 00:30:19.158 Finished: SUCCESS
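Looking back at the coverage step logged above, the post-processing follows a fixed recipe: merge the base and test captures into cov_total.info, then strip DPDK, system and example sources from the merged report. A condensed sketch of those lcov calls follows (the genhtml/geninfo rc options carried by the real commands are dropped for brevity):

```bash
# Coverage merge and filter (sketch of the lcov calls in the log above).
out=/home/vagrant/spdk_repo/spdk/../output
lcov_q=(lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q)

"${lcov_q[@]}" -a "$out/cov_base.info" -a "$out/cov_test.info" -o "$out/cov_total.info"
"${lcov_q[@]}" -r "$out/cov_total.info" '*/dpdk/*' -o "$out/cov_total.info"
"${lcov_q[@]}" -r "$out/cov_total.info" --ignore-errors unused,unused '/usr/*' -o "$out/cov_total.info"
"${lcov_q[@]}" -r "$out/cov_total.info" '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*' -o "$out/cov_total.info"
```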